You can see more discussion of the episode in this Forum post.
In the episode, Christian and Rob discuss ‘future bias’ — that is, that people seem to care more about their future experiences than about their past experiences — as well as:
A possible solution to moral fanaticism, where you can end up preferring options that give you only a very tiny chance of an astronomically good outcome over options that give you certainty of a very good outcome
How much of humanity’s resources we should spend on improving the long-term future
How large the expected value of the continued existence of Earth-originating civilization might be
How we should respond to uncertainty about the state of the world
The state of global priorities research
And much more
If you think that there is no fundamental asymmetry between the past and the future, maybe we should be sanguine about the future — including sanguine about our own mortality — in the same way that we’re sanguine about the fact that we haven’t existed forever.
–Christian Tarsney
Key points
Practical implications of past, present, and future ethical comparison cases
Christian Tarsney: I think there’s two things that are worth mentioning. One is altruistically significant, which is, if you think that one of the things we should care about as altruists is whether people’s desires or preferences are satisfied or whether people’s goals are realized, then one important question is, do we care about the realization of people’s past goals, including the goals of past people, people who are dead now? And if so, that might have various kinds of ethical significance. For instance, I think if I recall correctly, Toby Ord in The Precipice makes this point that well, past people are engaged in this great human project of trying to build and preserve human civilization. And if we allowed ourselves to go extinct, we would be letting them down or failing to carry on their project. And whether you think that that consideration has normative significance might depend on whether you think the past as a whole has normative significance.
Robert Wiblin: Yeah. That adds another wrinkle that I guess you could think that the past matters, but perhaps if you only cared about experiences, say, then obviously people in the past can’t have different experiences because of things in the future, at least we think not. So you have to think that the kind of fixed preference states that they had in their minds in the past, it’s still good to actualize those preferences in the future, even though it can’t affect their mind in the past.
Christian Tarsney: Yeah, that’s right. So you could think that we should be future biased only with respect to experiences, and not with respect to preference satisfaction. But then that’s a little bit hard to square if you think that the justification for future bias is this deep metaphysical feature of time. If the past is dead and gone, well, why should that affect the importance of experiences but not preferences? Another reason why the bias towards the future might be practically interesting or significant to people, less from an altruistic standpoint than from a personal or individual standpoint, is this connection with our attitudes towards death, which is maybe the original context in which philosophers thought about the bias towards the future. So there’s this famous argument that goes back to Epicurus and Lucretius that says, look, the natural reason that people give for fearing death is that death marks the boundary of your life, and after you’re dead, you don’t get to have any more experiences, and that’s bad.
Christian Tarsney: But you could say exactly the same thing about birth, right? So before you were born, you didn’t have any experiences. And well, on the one hand, if you know that you’re going to die in five years, you might be very upset about that, but if you’re five years old and you know that five years ago you didn’t exist, people don’t tend to be very upset about that. And if you think that the past and the future should be on a par, that there is no fundamental asymmetry between those two directions in time, one conclusion that people have argued for is maybe we should be sanguine about the future, including sanguine about our own mortality, in the same way that we’re sanguine about the past and sanguine about the fact that we haven’t existed forever. Which I’m not sure if I can get myself into the headspace of really internalizing that attitude. But I think it’s a reasonably compelling argument and something that maybe some people can do better than I can.
Fanaticism
Christian Tarsney: Roughly the problem is that if you are an expected value maximizer, which means that when you’re making choices you just evaluate an option by taking all the possible outcomes and you assign them numeric values, the quantity of value or goodness that would be realized in this outcome, and then you just take a probability-weighted sum, the probability times the value for each of the possible outcomes, and add those all up and that tells you how good the option is…
Christian Tarsney: Well, if you make decisions like that, then you can end up preferring options that give you only a very tiny chance of an astronomically good outcome over options that give you certainty of a very good outcome, or you can prefer certainty of a bad outcome over an option that gives you near certainty of a very good outcome, but just a tiny, tiny, tiny probability of an astronomically bad outcome. And a lot of people find this counterintuitive.
Robert Wiblin: So the basic thing is that very unlikely outcomes that are massive in their magnitude, that would be much more important than the other outcomes in some sense, end up dominating the entire expected value calculation and dominating your decision, even though they’re incredibly improbable, and that just feels intuitively wrong and unappealing.
Christian Tarsney: Well, here’s an example that I find drives home the intuition. So suppose that you have the opportunity to really control the fate of the universe. You have two options, you have a safe option that will ensure that the universe contains, over its whole history, 1 trillion happy people with very good lives, or you have the option to take a gamble. And the way the gamble works is almost certainly the outcome will be very bad. So there’ll be 1 trillion unhappy people, or 1 trillion people with say hellish suffering, but there’s some teeny, teeny, tiny probability, say one in a googol, 10 to the 100, that you get a blank check where you can just produce any finite number of happy people you want. Just fill in a number.
Christian Tarsney: And if you’re trying to maximize the expected quantity of happiness or the expected number of happy people in the world, of course you want to do that second thing. But there is, in addition to just the counterintuitiveness of it, there’s a thought like, well, what we care about is the actual outcome of our choices, not the expectation. And if you take the risky option and the thing that’s almost certainly going to happen happens, which is you get a very terrible outcome, the fact that it was good in expectation doesn’t give you any consolation, or doesn’t seem to retrospectively justify your choice at all.
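To make the arithmetic behind this worry concrete, here is a minimal illustrative sketch of the comparison an expected value maximizer would make. The specific figures (the size of the jackpot payoff, and treating value as a simple count of happy or unhappy lives) are assumptions chosen for illustration, not numbers from the episode.

```python
# Illustrative sketch of the fanaticism worry (numbers are assumptions, not from the episode).
# Values are in "number of happy lives"; unhappy lives count negatively.

safe_option_value = 1e12          # certainty of 1 trillion happy people

p_jackpot = 1e-100                # one-in-a-googol chance of the "blank check"
jackpot_value = 1e120             # fill in any sufficiently large finite number
bad_outcome_value = -1e12         # almost-certain outcome: 1 trillion miserable lives

gamble_expected_value = p_jackpot * jackpot_value + (1 - p_jackpot) * bad_outcome_value

# Expected value maximization prefers the gamble as soon as the jackpot is big enough,
# even though the gamble almost certainly produces the terrible outcome.
print(f"Safe option:  {safe_option_value:.3e}")
print(f"Risky gamble: {gamble_expected_value:.3e}")
print("Prefer gamble?", gamble_expected_value > safe_option_value)
```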
Stochastic dominance
Christian Tarsney: My own take on fanaticism and on decision making under risk, for whatever it’s worth, is fairly permissive. A weird and crazy view that I’m attracted to is that we’re only required to avoid choosing options that are what’s called first-order stochastically dominated, which means that you have two options, let’s call them option one and option two. And then there’s various possible outcomes that could result from either of those options. And for each of those outcomes, we ask what’s the probability if you choose option one or if you choose option two that you get not that outcome specifically, but an outcome that’s at least that good?
Christian Tarsney: Say option one, for any possible outcome, gives you a greater overall probability of an outcome at least that desirable; then that seems a pretty compelling reason to choose option one. Maybe a simple example would be helpful. Suppose that I’m going to flip a fair coin, and I offer you a choice between two tickets. One ticket will pay $1 if the coin lands heads and nothing if it lands tails, the other ticket will pay $2 if the coin lands tails, but nothing if it lands heads. So you don’t have what’s called state-wise dominance here, because if the coin lands heads then the first ticket gives you a better outcome, $1 rather than $0. But you do have stochastic dominance because both tickets give you the same chance of at least $0, namely certainty, both tickets give you a 50% chance of at least $1, but the second ticket uniquely gives you a 50% chance of at least $2, and that seems a compelling argument for choosing it.
Robert Wiblin: I see. I guess, in a continuous case rather than a binary one, you would have to say, well, the worst case is at least as good in, say, scenario two as in scenario one. And the first percentile case is better, and the second percentile case, and the median is better or at least as good, and the best case scenario is also as good or better. And so across the whole distribution of outcomes from worst to best, taking the probabilities as percentiles, the second scenario is always equal or better. And so it would seem crazy to choose the option that is always equally good or worse, no matter how lucky you get.
Christian Tarsney: Right. Even though there are states of the world where the stochastically dominant option will turn out worse, nevertheless the distribution of possible outcomes is better.
Robert Wiblin: Okay. So you’re saying if you compare the scenario where you get unlucky in scenario two versus lucky in scenario one, scenario one could end up better. But ex-ante, before you know whether you got lucky with the outcome or not, it was worse at every point.
Christian Tarsney: Yeah, exactly.
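For readers who want to see the ticket example worked through, here is a rough sketch of how you might check first-order stochastic dominance for the two coin-flip tickets Christian describes. The payoffs and probabilities come from his example; the code itself is just one illustrative way to formalize the check.

```python
# Sketch: checking first-order stochastic dominance for the two coin-flip tickets.
# Ticket A: $1 if heads, $0 if tails.  Ticket B: $2 if tails, $0 if heads.

ticket_a = {1: 0.5, 0: 0.5}   # payoff -> probability
ticket_b = {2: 0.5, 0: 0.5}

def prob_at_least(lottery, threshold):
    """Probability the lottery pays at least `threshold`."""
    return sum(p for payoff, p in lottery.items() if payoff >= threshold)

def stochastically_dominates(lottery1, lottery2):
    """True if lottery1 first-order stochastically dominates lottery2: it is at least as
    likely to reach every payoff level, and strictly more likely to reach some level."""
    thresholds = set(lottery1) | set(lottery2)
    at_least_as_good = all(
        prob_at_least(lottery1, t) >= prob_at_least(lottery2, t) for t in thresholds
    )
    strictly_better_somewhere = any(
        prob_at_least(lottery1, t) > prob_at_least(lottery2, t) for t in thresholds
    )
    return at_least_as_good and strictly_better_somewhere

print(stochastically_dominates(ticket_b, ticket_a))  # True: ticket B dominates ticket A
print(stochastically_dominates(ticket_a, ticket_b))  # False
```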
The scope of longtermism
Christian Tarsney: There are two motivations for thinking about this. One is a worry that I think a lot of people have — certainly a lot of philosophers have — about longtermism, which is that it has this flavor of demanding extreme sacrifices from us. That maybe, for instance, if we really assign the same moral significance to the welfare of people in the very distant future, what that will require us to do is just work our fingers to the bone and give up all of our pleasures and leisure pursuits in order to maximize the probability at the eighth decimal place or something like that of humanity having a very good future.
Christian Tarsney: And this is actually a classic argument in economics too, that the reason that you need a discount rate, and more particularly, the reason why you need a rate of pure time preference, why you need to care about the further future less just because it’s the further future, is that otherwise you end up with these unreasonable conclusions about what the savings rate should be.
Robert Wiblin: Effectively we should invest everything in the future and kind of consume nothing now. It’d be like taking all of our GDP and just converting it into more factories to make factories kind of thing, rather than doing anything that we value today.
Christian Tarsney: Yeah, exactly. Both in philosophy and in economics, people have thought, surely you can’t demand that much of the present generation. And so one thing we wanted to think about is, how much does longtermism, or how much does a sort of temporal neutrality, no rate of pure time preference actually demand of the present generation in practice? But the other question we wanted to think about is, insofar as the thing that we’re trying to do in global priorities research, in thinking about cause prioritization, is find the most important things and draw a circle around them and say, “This is what humanity should be focusing on,” is longtermism the right circle to draw?
Christian Tarsney: Or is it maybe the case that there’s a couple of things that we can productively do to improve the far future, for instance reduce existential risks, and maybe we can try to improve institutional decision making in certain ways, and other ways of improving the far future, well, either there’s just not that much we can do or all we can do is try to make the present better in intuitive ways. Produce more fair, just, equal societies and hope that they make better decisions in future.
Robert Wiblin: Improve education.
Christian Tarsney: Yeah, exactly. Where the more useful thing to say is not we should be optimizing the far future, but this more specific thing, okay we should be trying to minimize existential risks and improve the quality of decision making in national and global political institutions, or something like that.
The value of the future
Christian Tarsney: There is this kind of outside view perspective that says if we want to form rational expectations about the value of the future, we should just think about the value of the present and look for trend lines over time. And then you might look at, for instance, the Steven Pinker stuff about declines in violence, or look at trends in global happiness. But you might also think about things like factory farming, and reach the conclusion that actually, even though human beings have been getting both more numerous and better off over time, the net effect of human civilization has been getting worse and worse and worse, as we farm more and more chickens or something like that.
Christian Tarsney: I’ll say, for my part, I’m a little bit skeptical about how much we can learn from this, because outside view, extrapolative reasoning makes sense when you expect to remain in roughly the same regime for the time frame that you’re interested in. But I think there’s all sorts of reasons why we shouldn’t expect that. For instance, there’s the problem of converting wealth into happiness that we just haven’t really mastered, because, well, maybe we don’t have good enough drugs or something like that. We know how to convert humanity’s wealth and resources into cars. But we don’t know how to make people happy that they own a car, or as happy as they should be, or something like that.
Christian Tarsney: But that’s in principle a solvable problem. Maybe it’s just getting the right drugs, or the right kinds of psychotherapy, or something like that. And in the long term it seems very probable to me that we’ll eventually solve that problem. And then there’s other kinds of cases where the outside view reasoning just looks kind of clearly like it’s pointing you in the wrong direction. For instance, maybe the net value of human civilization has been trending really positively. Humanity has been a big win for the world just because we’re destroying so much habitat that we’re crowding out wild animals who would otherwise be living lives of horrible suffering. But obviously that trendline is bounded. We can’t create negative amounts of wilderness. And so if that’s the thing that’s driving the trendline, you don’t want to extrapolate that out to the year 1 billion or something and say, “Well, things will be awesome in 1 billion years.”
Externalism, internalism, and moral uncertainty
Christian Tarsney: Yeah, so unfortunately, internalism and externalism mean about 75 different things in philosophy. This particular internalism and externalism distinction was coined by a philosopher named Brian Weatherson. The way that he conceives the distinction, or maybe my paraphrase of the way he conceives the distinction, is basically an internalist is someone who says normative principles, ethical principles, for instance, only kind of have normative authority over you to the extent that you believe them. Maybe there’s an ethical truth out there, but if you justifiably believe some other ethical theory, some false ethical theory, well, of course the thing for you to do is go with your normative beliefs. Do the thing that you believe to be right.
Christian Tarsney: Whereas externalists think at least some normative principles, maybe all normative principles, have their authority unconditionally. It doesn’t depend on your beliefs. For instance, take the trolley problem. Should I kill one innocent person to save five innocent people? The internalist says suppose the right answer is you should kill the one to save the five, but you’ve just read a lot of Kant and Foot and Thomson and so forth and you’ve become very convinced, maybe in this particular variant of the trolley problem at least, that the right thing to do is to not kill the one, and to let the five die. Well, clearly there is some sense in which you should do the thing that you believe to be right. Because what other guide could you have, other than your own beliefs? Versus the externalist says well, if the right thing to do is kill the one and save the five, then that’s the right thing to do, what else is there to say about it?
Robert Wiblin: Yeah. Can you tie back what those different views might imply about how you would resolve the issue of moral uncertainty?
Christian Tarsney: The externalist, at least the most extreme externalist, basically says that there is no issue of moral uncertainty. What you ought to do is the thing that the true moral theory tells you to do. And it doesn’t matter if you don’t believe the true moral theory, or you’re uncertain about it. And the internalist of course is the one who says well no, if you’re uncertain, you have to account for that uncertainty somehow. And the most extreme internalist is someone who says that whenever you’re uncertain between two normative principles, you need to go looking for some higher-order normative principle that tells you how to handle that uncertainty.
Hi listeners, this is the 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems, what you can do to solve them, and whether or not the past actually exists. I’m Rob Wiblin, Head of Research at 80,000 Hours.
The Global Priorities Institute at Oxford University has led to some of our most popular episodes in the past, thanks to Hilary Greaves and Will MacAskill — and in this episode we’re back for more fundamental thinking about what matters most with their colleague Christian Tarsney.
I was slightly worried this episode would be a bit too technical, but Christian turned out to be a great communicator who was able to zero in on the parts of his papers that really matter to those of us trying to make the world a better place.
Most importantly, I think this episode may contain a real solution to the problem of fanaticism and Pascal’s mugging cases, which have in recent years been used to challenge the merit of using expected value to make decisions in high-stakes situations.
I came into this interview not really understanding Christian’s research, but left able to explain it to my housemates, which counts as serious progress in my mind.
As always, we’ve got links to learn much more on the page associated with this episode, as well as a transcript and summary of key points. If your podcasting software allows it, we also support chapters so you can skip to whichever part of the conversation interests you most.
Robert Wiblin: Today, I’m speaking with Christian Tarsney. Christian is a philosopher at Oxford University’s Global Priorities Institute where he works with previous 80,000 Hours podcast guests Hilary Greaves and Will MacAskill. He did his PhD at the University of Maryland on how to make rational decisions when you’re uncertain about fundamental ethical principles, and his research interests include ethics and decision theory, as well as effective altruism and political philosophy. He’s published papers on — among many other things — the use of discount rates for climate policy and our attitudes towards past and future experiences. Fun stuff. Thanks for coming on the podcast, Christian.
Christian Tarsney: Thanks, Rob. Great to be here.
Robert Wiblin: I hope to get to talk about moral fanaticism and epistemic challenges that people have made to longtermism. But first, what are you working on at the moment and why do you think it’s important?
Christian Tarsney: So broadly, I’m a researcher in philosophy at the Global Priorities Institute, and we are trying to build a field of global priorities research, which means thinking about how altruistically motivated agents should use their resources to do the most good — and more specifically, what causes or problems they should focus on. At the moment we’re focused on building that field in philosophy and economics and trying to recruit the tools of those disciplines to answer questions that we think are really important. We think this is important because if we can come up with better answers to these questions, then hopefully that’ll influence what people actually do when they’re deciding where to allocate their resources.
Christian Tarsney: I think as a philosopher, you always have this background worry, are we actually improving our understanding of anything or are we just spinning our wheels? But optimistically, I think we’ve made some progress and are continuing to make progress on the low-hanging fruit because not a lot of people have thought really explicitly about this question of how to use resources to do the most good and how to prioritize among the many things that seem important and pressing. More specifically, my own research interests at the moment… I have a few things on my plate, but the things that are really gripping me, number one are epistemic issues to do with predicting and predictably influencing the far future. So insofar as at least one of the most important things we want to do with our resources is make the world a better place in the very long term, we want to be able to predict the long-term effects of our actions.
Christian Tarsney: And we just have very little empirical information on our ability to predict or predictably influence the future on the scale of centuries or millennia. It’s hard to see how we could have that data. And so we have to do some a priori speculating or modeling to try to figure out how we can do this well. And then the second related question that I’m interested in is, well, suppose it turns out that we have a limited ability to predict the far future, but we have enough that in expectation the far future really matters, so we can make a big difference to the expected value of the far future. But most of that expected value comes from tiny probabilities of having enormous, really persistent effects. Should we just naively maximize expected value in those situations? Or are there some other decision rules that apply when we’re dealing with those extreme probabilities? So those are two problems that seem pressing from the standpoint of cause prioritization, and are also neglected and hopefully tractable with the tools of philosophy and economics.
Robert Wiblin: Beautiful. Alright. Yeah. We’ll return to all of these issues that you raised through the course of the conversation, and also check in on how the field of global priorities research is going later on, but let’s waste no time getting into an interesting philosophical issue that you’ve looked into in the past, which is called future bias. You’ve got two papers out on this topic, called Thank goodness that’s Newcomb: The practical relevance of the temporal value asymmetry and Future bias in action: Does the past matter more when you can affect it? First off, what is future bias, for people who are not familiar with it?
Christian Tarsney: Broadly, future bias or the bias towards the future or the temporal value asymmetry is this phenomenon that people seem to care more about their future experiences than their past experiences. And that means, among other things, that you’d prefer — all else being equal — to have a pleasant or positive experience in the future, rather than the past. And you’d prefer to have a painful or a negative experience in the past, rather than the future. So there’s a number of cases or thought experiments that illustrate this, but a famous one from Derek Parfit goes like this: Imagine that you’re going to the hospital for an operation. And the operation requires you to be conscious and it will be very painful, but they’ll give you a drug afterwards to temporarily forget about it. So when you wake up after the operation, you won’t immediately remember that it’s happened. And so you wake up in the hospital and you can’t remember whether you’ve had the operation. And you call the nurse and the nurse comes over and you say, “Have I had my operation yet?”
Christian Tarsney: And they look at the foot of your bed, where there are two different charts for two patients. And they say, “Well, you’re one of these two, I don’t know which one is you. One of these patients had a three-hour operation yesterday and it was very long and painful and difficult, but it was a complete success. And that patient will be fine going forward. The other patient is due to have a one-hour operation later today, which will be much less painful and also expected to turn out well and so forth.” And the question is which patient would you rather be? And most people have the intuition that you would rather be the patient who had the three-hour operation yesterday rather than the one-hour operation later today, because then the pain is in the past.
Robert Wiblin: Yeah.
Christian Tarsney: So what’s odd about this is of course, normally we prefer less pain rather than more pain. In this case, we prefer more pain just because the pain would be in the past rather than the future.
Robert Wiblin: Yeah. So that feels very intuitive. I think to most people that they’d rather have had bad experiences in the past than have bad experiences coming up. What’s problematic about it? Is there some tension between that and maybe like other beliefs or commitments that we have?
Christian Tarsney: Yeah. So a few arguments potentially can be made for the irrationality of future bias. One is just that the burden of proof is on the person who wants to defend or justify future bias to explain what’s the relevant difference between the past and the future such that we should care more about the one than the other. And it turns out that this is just surprisingly difficult to do. So you can contest that the burden of proof actually goes that way. But for instance, there’s this famous argument from Parfit called future Tuesday indifference. He says, “Look, just imagine someone who is normal in every respect, except that they don’t care about what happens to them on future Tuesdays. So if they can have a one-hour operation next Monday or a three-hour operation next Tuesday, they’ll opt for the three-hour operation just because it’s on a Tuesday.”
Christian Tarsney: And we clearly think there’s something normatively defective about that person. I think many of us would be inclined to say they’re irrational just because something’s on a future Tuesday. Why is that a reason to care about it less? So similarly, just because an event is in the past, why should we care about it less?
Robert Wiblin: Okay. I guess I feel like it seems very natural that humans would have this intuition, or that we would have kind of evolved or learned this intuition, because our past experiences have already happened, aren’t really changeable, and aren’t going to happen again; it seems like you can’t really have any causal effect on them. So to some extent it’s kind of water under the bridge and it makes practical sense to ignore the past? Or I mean, maybe learn from the past, but to ignore things that happened in the past because we’re not going to be able to affect them in the same way that we can affect something else that might happen in future. Is that a good enough reason not to worry about them? Or maybe is it that it’s a good reason not to worry too much about the past in normal circumstances, but in these hypothetical odd scenarios that we paint where you can, in some sense, have an effect on the past, those are the cases where you should worry that your intuition is getting polluted by the normal situation, where the past is unaffectable?
Christian Tarsney: Yeah. So I think a lot of people do take the view that our inability to affect the past has something centrally to do with our indifference toward past experiences. And actually in this paper Future bias in action recently published by myself and some collaborators at the University of Sydney, we tried to test this experimentally. And we found that in fact, when you ask people to imagine hypothetical scenarios where they can affect their own past experiences, they care about their past experiences more, which suggests that your inability to affect the past is one reason why you feel indifferent to it.
Christian Tarsney: But at the same time, if we’re asking the normative question of should we be indifferent to the past, then there are various reasons to think that our inability to affect the past is not a reason to judge that our past experiences don’t matter as much as our future experiences. So for instance, if that were true, then you should similarly be indifferent to inevitable future experiences. If you know for sure that something bad is going to happen to you tomorrow, you shouldn’t care about it. And in fact, we don’t have that kind of attitude. So that seems like at least a kind of inconsistency.
Robert Wiblin: Yeah. If I recall from that experiment that you did, the unaffectability of the past explained part of people’s different reactions.
Christian Tarsney: Yeah.
Robert Wiblin: But then when you got rid of the, or you tried to equalize the unaffectability, then there was still some future bias present.
Christian Tarsney: Yeah, that’s right. So what we ended up concluding in that paper is there are probably multiple explanations for future bias. The other explanation that people have prominently proposed is that we care more about the future because we have the intuitive belief that we’re moving through time. In some sense, that’s hard to explicate, but we have this intuition that we’re moving away from the past and towards the future, and that your future experiences are ahead of you rather than behind you, and that makes it rational to care more about the future than the past.
Robert Wiblin: So it’s like time is kind of playing a videotape, and the things that haven’t played yet are still coming up. And so you can still experience that pain, whereas the stuff in the past is somehow irrelevant or just wiped off of the ethical picture somehow.
Christian Tarsney: Yeah, that’s right. I mean, it turns out to be just very hard to explain, well, first of all, this idea of moving through time or time having a direction or a flow, and then second to explain why that should make it rational to care less about the past than the future in a way that doesn’t just become a roundabout way of saying, well, the past is in the past and the future is in the future, but a lot of people do see an intuitive connection here, including me.
Robert Wiblin: Yeah. Okay. It sounds like we might have to take a detour into the philosophy of time, or understand what different models people have of the nature of time and the present in order to dissect whether this idea makes any sense. You want to give an intro to that?
Christian Tarsney: Sure. So the central debate in the philosophy of time over the last 100 years or so is whether this idea of time moving or flowing or us moving from the past towards the future corresponds to any objective feature of reality. And this is a debate that’s also playing out, for instance, in physics. It’s something that our best physical theories maybe give us some indications one way or another, but don’t seem to settle, and you have physicists as well as philosophers on either side of this debate. And various arguments have been proposed either way, but well, the debate is still very much unsettled. And it’s also a little bit unclear exactly what the debate is about.
Christian Tarsney: So one thing, for instance, that people seem to disagree about, is the present moment, the ‘now.’ Is there one moment in time that’s objectively now, and that moves from earlier times towards later times? Or is it just that, for instance, the current time slice of me happens to be located at this location in time, and when I say ‘now,’ well ‘now’ just works like ‘here’ as a way of indicating the place in time where I happen to be located. So that’s one aspect of this debate that people try to get a handle on.
Robert Wiblin: Right. I don’t know that much about the philosophy of time, but I think my understanding is that there are three big theories that people put forward with different levels of plausibility. One is I think presentism, which you were describing, which is like, only the present instant is ‘actual,’ I think is the term that we use. I guess I’m not entirely sure what ‘actual’ means in this context, maybe that’s what people debate a lot. People are like, only the present instant is actual. Then you’ve got the ‘growing block’ theory of time, where all of the past exists or is actual because that has kind of been locked in, because it’s already happened. And I guess the present instant exists as well, and that instant is just constantly being added to this recording of time that gets locked in. But in that one, the future isn’t yet actual.
Robert Wiblin: And then I guess you have eternalism, which is the idea that the past, the present, and the future are all actual to the same degree. It’s just that we happen to be like… My personal self happens to be passing through this instant, but all of them exist in some sense. And I guess on that view that there would be symmetry between things that happened in the past and things that happened in the future and how ethically weighted they are.
Christian Tarsney: Yeah, that’s basically right. But there are two separate debates here that are worth teasing apart. So one is about what philosophers called the ontology of time, so what moments in time or parts of time exist. And that’s the debate that you were describing. And if you’re a presentist or a growing block theorist, then you’re basically committed to the passage of time and the movement from the past to the future being in some sense objectively real. But if you take this other view, eternalism, you think the past, the present, and the future are all equally real. That doesn’t necessarily commit you one way or another on this debate about the passage of time. So you can still believe that the past or the future are real, but the present is still uniquely and objectively present. It has some special status. So there’s what people call the ‘moving spotlight’ theory, which says there is this eternal block of time, past, present, future events, all existing. But one moment in the block is illuminated at any given moment. And that’s the present.
Robert Wiblin: I see, interesting. I guess on the growing block model, where what actually exists in this ontological sense is kind of increasing as time passes, that would seem to suggest in some way that maybe you care more about the past, right? Because the past is kind of actual and locked in. Whereas the future is this ethereal thing that hasn’t happened yet. I guess maybe you could say there’s a symmetry if the future will happen. So at some point it will matter, but inasmuch as it’s uncertain, the past matters potentially even more.
Christian Tarsney: Yeah. This is something that philosophers have remarked on repeatedly, and one thing that people often say is kind of surprising, that nobody defends ‘shrinking block’ theory, that says the present and the future are real and the past isn’t. That would be a really neat explanation for why the future matters more than the past. But interestingly, we have on the one hand this very strong intuition that the future matters more than the past. And on the other hand, many people have the intuition that the past is real in a way that the future isn’t.
Robert Wiblin: So what kind of resolutions have people proposed to this? And how do they interact with people’s broader philosophical attempts to make sense of the nature of time?
Christian Tarsney: Yeah, well, so there’s an ongoing debate — as there usually is in philosophy — about whether the bias towards the future is rational or irrational. And maybe at a finer level of grain, whether it’s rationally required to care more about the future or rationally required to be neutral between different times, or you’re just rationally permitted to do whatever you want. And the latest set of moves in this debate have involved pointing out various ways in which whether you care about the past or not can affect your choices. So the obvious boring case is, well, what if there’s backward time travel? And you could actually retro-causally affect your past experiences? But there are other interesting cases. So for instance, if you are risk averse, then whether you’re biased towards the future or not can make a difference to your choices. Because whether one option is riskier or less risky than another can depend on whether you’re counting the stuff in the past that’s already baked in — and it might, for instance, be correlated in certain ways with what’s going to happen in the future.
Robert Wiblin: Another approach that one might take to this would be to reject what you were saying earlier, that the burden of proof is on the person who says that they care more about the future. And you might say, well, maybe this is just like, rather than being something that seems more irrational, like the future Tuesday case, where you just, for some reason that you can’t explain, don’t care about Tuesdays, this is more like a taste thing. Where it’s like, I like apples, but I don’t like oranges. We don’t think that you have a special burden of proof there. It’s more just a matter of taste, and a matter of personal preference. Is it plausible to run that line of argument? That it’s just like, personally, I just care about the future, and I don’t care about the past, and that’s just how I am and I don’t have to justify myself?
Christian Tarsney: I think that’s plausible. There are a couple arguments you could mount against it. So one question or complication is whether the bias towards the future also affects your other-regarding or altruistic preferences. So this is something people seem to have different intuitions about. Some people think that the bias towards the future is exclusively first personal. So when I’m thinking about other people’s experiences, people I care about, I don’t particularly care whether their pain is in the past or the future. You can manipulate people’s intuitions about this. So if you think about someone far away on the other side of the world, maybe it doesn’t seem to matter that much, whether their pain happened yesterday or tomorrow. But if it’s, say, your partner who you live with, you’ll feel better if they’ve already had their painful operation yesterday rather than today.
Christian Tarsney: And of course, if you are biased towards the future, at least in some sort of other-regarding altruistic cases, then it seems like there’s a kind of higher burden of justification. It can’t just be your personal preference that their pains be in the past rather than the future. There’s also the set of ways in which the bias towards the future might affect your choices. So for instance, if you’re biased towards the future and risk averse in a particular way, you can be money pumped. So you can make choices that will result in you being definitely worse off than you otherwise might’ve been. And you might think any pattern of preferences that allows you to be money pumped is ipso facto irrational, and not just a matter of taste.
Robert Wiblin: Yeah. Can you explain this concept of money pumping? It shows up a lot in this discussion of ethics and decisions theory and rationality and so on, but I think probably not everyone has heard the idea.
Christian Tarsney: Yeah. So a money pump basically is a sequence of choices where an agent with particular dispositions will choose a series of options that leave them definitely worse off than some other series of options they might have chosen would have. So the classic example is if you have cyclic preferences. If I have apples and oranges and bananas, and I prefer an apple to an orange and an orange to a banana, and a banana to an apple, then, well, you can say, “I have an apple,” and you can say, “Well, I’ll trade you your apple for a banana if you pay me one cent.” And I take that deal because I prefer bananas. And then you say, “Well, I’ll give you an orange in exchange for that banana, if you give me one cent.” And similarly then I can get you to trade back for the apple, and you’ve gotten three cents out of me, and I’m just stuck with the apple that I had in the first place. So all sorts of patterns of preference can give rise to these sequences of choices that leave you definitely worse off.
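As a concrete illustration of the cycle Christian describes, here is a short sketch. The fruit preferences and one-cent fees are exactly his example; the simulation is just an illustrative way of showing that the agent ends up back where they started, three cents poorer.

```python
# Sketch of the classic money pump arising from cyclic preferences:
# apple > orange, orange > banana, banana > apple.

preferred_to = {
    "apple": "orange",    # an apple is preferred to an orange
    "orange": "banana",   # an orange is preferred to a banana
    "banana": "apple",    # a banana is preferred to an apple
}

def accepts_trade(current_item, offered_item):
    """The agent accepts any trade to an item they strictly prefer to what they hold."""
    return preferred_to[offered_item] == current_item

holding = "apple"
cents_paid = 0

# The exploiter offers a banana, then an orange, then the original apple,
# each time charging one cent for an "upgrade" the agent strictly prefers.
for offer in ["banana", "orange", "apple"]:
    if accepts_trade(holding, offer):
        holding = offer
        cents_paid += 1

print(holding, cents_paid)  # back to "apple", but 3 cents poorer
```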
Robert Wiblin: Yeah. Sometimes people would defend that it’s acceptable in some way to hold a position where you can be money pumped. Often in philosophy you face unpleasant trade-offs, you have to choose a position that has one weakness, or a position that has another weakness. And this is one of the weaknesses that a view might have, is that it’s vulnerable to money pumping. And it’s an undesirable property, but not necessarily a completely decisive one if every other option also has some unpleasant side effects.
Christian Tarsney: Yeah. I think that’s right. There’s plenty of debate about how decisive money pumps should be. I think one distinction that’s worth making is between what are sometimes called ‘forcing’ versus ‘non-forcing’ money pumps. So something like having incomplete preferences. If I prefer apples to bananas, but oranges are just incomparable to both, like I have no preference between apples or bananas and oranges, then it seems naively like it’s rationally permissible for me to make a series of choices that’ll leave me worse off, but it’s also rationally permissible for me to not do that. And you can say, well, there’s just an extra rule of rationality that says I shouldn’t do the sequence of things that will constitute a money pump. But in other cases, like the transitivity case, your preferences seem to commit you or force you to do the thing that leaves you definitely worse off. And it seems at least intuitively compelling that having preferences that force you or commit you to make yourself definitely worse off, that that’s at least a significant theoretical cost.
Robert Wiblin: Yeah. There’s something more seriously problematic there.
Robert Wiblin: Okay. So we’ve discussed a couple of different approaches that people might take to resolve this issue, or a couple of different positions that people might take. How do people respond to a time travel case where you imagine a world where time travel is possible? You can go back into the past and change how things went, and then make people experience less suffering in the past. Does that tend to make a big difference to people’s attitudes, to how important the past is to them?
Christian Tarsney: So this is what we investigated in this paper Future bias in action and we found that it does, to some extent. So it doesn’t in aggregate make people perfectly time neutral, people still on average care more about the future than the past, but the asymmetry becomes weaker when you consider backward time travel cases.
Robert Wiblin: Yeah. Interesting. I guess it’s a bit hard to know how to concretize the time travel case, because you imagine like, okay, so you can go back in time and then run things again and have them go better. But then I’m like, does that mean it’s happened twice? Does it now get double value? Or am I erasing the original run-through and causing it not to have had any moral consequences? It almost raises as many questions as it answers.
Christian Tarsney: Yeah. Your theory of time travel definitely makes a difference here. You might think, well, if you think of backward time travel in a way where events, say, happened the first time around in the past, but then you can go back and erase them, there’s this additional question: Do the events that you erased still matter, or are they no longer part of the timeline? I think it’s fair to say that most philosophers are inclined to think that with time travel — insofar as it’s metaphysically possible — there has to be one consistent timeline. And so anything that you do if you go back into the past was already part of the past, but you might have limited information.
Christian Tarsney: So the case that we described in our experiment, for instance, you know that you were tortured for some period of time in the past, but you don’t remember exactly how long you were tortured or how many times you were subjected to an electric shock. And you have the opportunity to affect that retro-causally to determine whether you had 1,000 shocks or 1,010 shocks, or something like that. But you know that you’re not erasing the past, you’re just influencing what the past already was.
Robert Wiblin: Philosophers think that time travel, or I guess physicists think that time travel is kind of conceptually possible, or like, I guess I should say retro-causality is possible, but you need to have a self-consistent loop—
Christian Tarsney: Mm-hmm.
Robert Wiblin: —where the past affects the present which causes the present to cause the past. And then you’ve got a consistent series of causes that all fit together like puzzle pieces. I don’t know whether you want to explain the philosophy of time travel, but is that right?
Christian Tarsney: Yeah. I’m not particularly… I’m venturing a little bit outside my area of expertise, but general relativity has solutions that involve backwards time travel, where you have what are called closed timelike curves moving into their own past. But yeah, those solutions all involve one self-consistent timeline rather than, for instance, branching timelines, or erasing events that originally happened in the past or anything like that.
Robert Wiblin: Yeah. I think this comes up in not just philosophy, because there’s like some theories within physics of like at the subatomic level, you could end up with retro-causal stuff, and then you want to figure out well, is that self-consistent in a way? Or is that going to violate some other fundamental principle of physics?
Robert Wiblin: Okay. Coming back to future bias though, let’s talk about the interaction between future bias and decision theory, which is something that you looked into. First off, for people who aren’t familiar, what is decision theory, in brief? If it’s possible to do this one in brief.
Christian Tarsney: Sure. So decision theory is the theory of how people either do or should make decisions. So descriptive decision theory studies how people do make decisions, normative decision theory studies how they should make decisions. There are a number of questions that decision theorists ask. So there’s no one question that centrally characterizes the discipline. One major question is how we respond to risk or uncertainty. So for instance, should we maximize expected value or expected utility, or are we allowed to be risk averse in ways that violate the axioms of expected utility theory? There’s also this famous debate between evidential decision theorists and causal decision theorists about how to act in cases where your choices give you some information about the pre-existing state of the world.
Robert Wiblin: Yeah. Is there a simple thought experiment that kind of elucidates the difference between evidential and causal decision theory?
Christian Tarsney: Yeah. So the classic case is called Newcomb’s problem. The idea is that there is a predictor who’s just very good at analyzing human motivations and predicting human choices. And the predictor presents you with the following choice: There are two boxes in front of you. One of them is transparent, and you can see it contains $1,000. The other box is opaque. And what the predictor tells you is that your options are either to take just the opaque box and get whatever’s inside there, or to take the opaque box and the transparent box together. But if they predicted that you would take both boxes, then they left the opaque box empty. And if they predicted that you would take only the opaque box, they put $1 million inside. So evidential decision theorists say, well, if the predictor is really that great, either they’re infallible at predicting my choices or they’re just very, very good, then if I take the opaque box that tells me that the predictor certainly or almost certainly predicted that I would do that, and put $1 million inside. So I end up with $1 million. Whereas if I take both boxes, then I’ll only end up with $1,000, because the predictor won’t have put the $1 million inside.
Christian Tarsney: Whereas a causal decision theorist says, okay, but your choice makes no difference causally to whether there’s $1 million in the opaque box or not. There either is or there isn’t. And in either case, taking both boxes leaves you $1,000 richer than you would have been had you taken only the opaque box. So the rational thing to do is take both boxes.
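Here is a rough sketch of the two calculations being contrasted. The $1,000 and $1 million payoffs are from the standard setup Christian describes; the 99% predictor accuracy and the prior used in the causal calculation are assumed figures added purely for illustration.

```python
# Sketch: evidential vs causal reasoning in Newcomb's problem.
# Assumes a 99%-accurate predictor (an illustrative figure, not from the episode).

accuracy = 0.99
million, thousand = 1_000_000, 1_000

# Evidential decision theory: conditioning on your choice tells you what was likely predicted.
edt_one_box   = accuracy * million                   # likely predicted one-boxing -> $1M inside
edt_two_boxes = (1 - accuracy) * million + thousand  # likely predicted two-boxing -> just the $1K

# Causal decision theory: the box contents are already fixed; for any prior probability
# that the million is there, two-boxing pays exactly $1,000 more.
p_million_already_there = 0.5                        # arbitrary prior; the comparison doesn't depend on it
cdt_one_box   = p_million_already_there * million
cdt_two_boxes = p_million_already_there * million + thousand

print("EDT:", edt_one_box, "vs", edt_two_boxes)   # one-boxing looks far better
print("CDT:", cdt_one_box, "vs", cdt_two_boxes)   # two-boxing is always $1,000 better
```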
Robert Wiblin: Yeah. I think a thought experiment that feels more intuitive and a bit less sci-fi to me is the smokers’ lesion problem. We find out that a large part of the reason why smokers tend to die young isn’t just that they’re smoking; it’s that there’s some correlation, say genetically, between people who are predisposed to enjoy smoking and have a compulsion to smoke, and people who happen to have a genetic predisposition for brain lesions that can then kill them later in life. And so in that case, you’ve got this question: if you find that you enjoy smoking and want to smoke and decide to smoke, that gives you evidence that you’re more likely to have this deadly brain lesion disease, for some genetic correlation reason.
Robert Wiblin: But then should you take that into account in your decision on whether to smoke? It lowers your life expectancy, but not causally through smoking; it’s just that smoking gives you evidence about something else about yourself. And it’s a bit of a puzzle: smoking lowers your life expectancy in expectation by more than it does causally, and should you therefore factor that in? And that case is more intuitive because it doesn’t require anything that’s really outside of what we’re used to experiencing.
Christian Tarsney: Yeah. The smoking lesion case, that’s the classic counterexample to evidential decision theory, because, well, I always find it hard to remember what the right intuition is supposed to be here, but most people intuit that the rational thing to do is to smoke, because it doesn’t cause cancer. It just gives you information that you’re more likely to have cancer. But there are some complications about the case that make it possible for evidential decision theorists to try to start to explain it in a way.
Robert Wiblin: I’ve got some links to decision theory stuff for people who are interested. What’s the interaction between future bias and decision theory that you’ve looked into?
Christian Tarsney: Well. So the particular connection that I’ve explored in this paper Thank goodness that’s Newcomb is that if you’re an evidential decision theorist, then whether you do or don’t care about the past can affect your choices in ways that don’t require exotic backwards time travel or retro causation or anything like that. So I imagined basically a variant of the Newcomb case where the predictor kidnaps you and subjects you to electric shocks for a period of time. And then they give you the option at the end to shock yourself one more time before you’re released, but they made a prediction in advance about whether you would choose to give yourself that final shock. And if they predicted that you would, they shocked you fewer times over the last week that they were holding you and torturing you. And if they predicted that you wouldn’t, they shocked you more times.
Christian Tarsney: And of course, if you’re an evidential decision theorist and you’re time neutral, you want to minimize the total number of shocks that you’ve ever experienced. And so you’ll choose to shock yourself now. But if you’re either a causal decision theorist or you’re biased towards the future, then you would not choose to shock yourself.
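To see why the choice comes apart for the two kinds of agent, here is an illustrative sketch. All the specific numbers (predictor accuracy and shock counts) are made up for the example rather than taken from the paper; the only thing doing the work is that taking the final shock is evidence of a less shock-filled past.

```python
# Sketch of the predictor-and-shocks case; the predictor's accuracy and all
# shock counts are made-up illustrative numbers, not figures from the paper.

accuracy = 0.99                  # assumed predictor accuracy
past_if_predicted_yes = 900      # shocks already received if the predictor foresaw "yes, take the final shock"
past_if_predicted_no = 1000      # shocks already received if the predictor foresaw "no"

def expected_shocks(take_final_shock, count_past):
    """Expected shocks conditional on your choice (evidential-style reasoning)."""
    if take_final_shock:
        expected_past = accuracy * past_if_predicted_yes + (1 - accuracy) * past_if_predicted_no
        future = 1
    else:
        expected_past = accuracy * past_if_predicted_no + (1 - accuracy) * past_if_predicted_yes
        future = 0
    return (expected_past if count_past else 0) + future

# A time-neutral evidential reasoner counts past shocks too, so taking the final shock
# minimizes expected total shocks; a future-biased (or causal) reasoner only counts
# the one avoidable future shock and declines.
print(expected_shocks(True, count_past=True) < expected_shocks(False, count_past=True))    # True
print(expected_shocks(True, count_past=False) < expected_shocks(False, count_past=False))  # False
```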
Robert Wiblin: Okay. What should we make of that?
Christian Tarsney: Well, one thing you could make of it is that this is one more case where evidential decision theory tells us something silly. And so we should be causal decision theorists. Of course, then you can rerun a similar case, which is basically what we did in this experimental paper, where you use backwards time travel rather than predictors to give people the option of affecting their own past experiences. And many of the same sort of philosophical issues come up.
Christian Tarsney: My take in the original paper was that our intuitions about the irrelevance or our indifference towards our past experiences don’t change very much when we’re considering these cases where we can ‘affect’ our past experiences, or our choices give us evidence about our past experiences. So my own take was that this undercuts the idea that the reason we don’t care about the past is because it’s practically irrelevant. But then this experimental paper that we did actually finds that people do change their intuitions or their judgements, at least on average in these cases. So my own philosophical take turned out to be undercut anyway, by our experimental results.
Robert Wiblin: Interesting. Yeah, I feel in that case, I have the intuition that you want to do the thing that reduces the total amount of electric shocks over all periods of time, which I guess is what you found other people felt at least to some degree. And I wonder whether there’s something that’s going on where it kind of depends on whether you’re thinking about it from the prudential selfish perspective of you at this instant in time, or whether you’re thinking about what would be a better world, all things considered. And it seems like what would be a better world all things considered is less torture in total. Maybe like what’s best for me right now is minimizing the amount of future torture that I’m going to experience. But then it seems like maybe we’re running up against a tension between our prudential perspective and then our ethical commitments. And this is creating a tension that we somehow have to resolve.
Christian Tarsney: Yeah, that could be. So if you think that we are generally time neutral when we’re thinking about other people, and then in this case, you can put on your impartial altruist hat when thinking about your past self and just treat them as another person that you’re concerned with, then maybe that is one reason why you would be more inclined to accept additional future pain to avoid a greater amount of past pain. But it is, as I mentioned earlier, non-obvious whether we’re generally time neutral when we’re thinking about other people. So one view you could take that’s not completely counterintuitive is well, the past as a whole is just dead and gone, not just my experiences, but other people’s experiences. And so what we should be thinking about as altruists is not making the world as a whole across all the time and space a better place, but making the future better, because that’s what’s still out there to be experienced.
Robert Wiblin: Okay. I want to push on from this in just a second, but it sounded like earlier you were saying that the growing block theory and presentism and eternalism are kind of all still philosophically acceptable, and there are advocates for all of them in philosophy and physics. I kind of understood that there were some thought experiments that had made eternalism, the idea that the past, the present, and the future are all actual in some sense, to be a more dominant view, at least among physicists anyway. Have I misunderstood that?
Christian Tarsney: I think you’re probably right, that it’s more dominant among physicists, and probably even more dominant among philosophers, although all of these views still have active defenders. Maybe the most powerful argument that has convinced a lot of people is just that a naive picture of time, where there’s an objective present moment moving from the past towards the future, requires that you be able to chop the universe into time slices in this objective way, where we can say all of these events, at all these different locations across the universe, those are the ones that are present right now.
Robert Wiblin: We’re simultaneous.
Christian Tarsney: Right. But special relativity teaches us that actually, whether two events at different locations are simultaneous with each other depends on basically how fast you’re moving. Right? So two people in motion relative to each other will disagree about which events are simultaneous. And so it looks — at least in relativistic physics — like there just couldn’t be a privileged plane of simultaneity, that all of those events are present and nothing else is.
Robert Wiblin: Yeah. I think that this shows up in ethics elsewhere when you’re thinking about the ethics of the future. Because you can end up with these funny cases where someone who cares less about the future, say, because you ask them like how much would you pay to prevent something terrible happening in 1,000 years? And they say, well, not very much, because it’s so far away in the future. Then you do something where it’s like, you send them away at almost light speed and then they can come back in what is to them only a few minutes or only a few hours, and then arrive in effect 1,000 years in the future in this other location. And then the terrible thing happens. And you’re like, what is the amount of time that’s passed? Because this all depends on the speed they were going at and the path they traveled. So if this is like only a few hours away from your perspective, does that mean that the 1,000-year thing doesn’t matter?
Christian Tarsney: Yeah.
Robert Wiblin: It introduces this peculiar kind of inconsistency.
Christian Tarsney: I think that’s one very good argument against what’s called ‘pure time preference.’ Thinking that the mere passage of time or mere distance in time has ethical significance.
Robert Wiblin: Alright. We’ll put up a link to those papers and people can explore more if they would like, I’m sure there’s plenty more in there. Is there anything that people should take away in their practical life and their decision making altruistically from these past, present, future ethical comparison cases?
Christian Tarsney: I think there’s two things that are worth mentioning. One is altruistically significant, which is, if you think that one of the things we should care about as altruists is whether people’s desires or preferences are satisfied or whether people’s goals are realized, then one important question is, do we care about the realization of people’s past goals, including the goals of past people, people who are dead now? And if so, that might have various kinds of ethical significance. For instance, I think if I recall correctly, Toby Ord in The Precipice makes this point that well, past people are engaged in this great human project of trying to build and preserve human civilization. And if we allowed ourselves to go extinct, we would be letting them down or failing to carry on their project. And whether you think that that consideration has normative significance might depend on whether you think the past as a whole has normative significance.
Robert Wiblin: Yeah. That adds another wrinkle that I guess you could think that the past matters, but perhaps if you only cared about experiences, say, then obviously people in the past can’t have different experiences because of things in the future, at least we think not.
Christian Tarsney: Yeah.
Robert Wiblin: So you have to think that the kind of fixed preference states that they had in their minds in the past, it’s still good to actualize those preferences in the future, even though it can’t affect their mind in the past.
Christian Tarsney: Yeah, that’s right. So you could think that we should be future biased only with respect to experiences, and not with respect to preference satisfaction. But then that’s a little bit hard to square if you think that the justification for future bias is this deep metaphysical feature of time. If the past is dead and gone, well, why should that affect the importance of experiences but not preferences? Another reason why the bias towards the future might be practically interesting or significant to people less from an altruistic standpoint than from a personal or individual standpoint, is this connection with our attitudes towards death, which is maybe the original context in which philosophers thought about the bias towards the future. So there’s this famous argument that goes back to Epicurus and Lucretius that says, look, the natural reason that people give for fearing death is that death marks a foundry of your life, and after you’re dead, you don’t get to have any more experiences, and that’s bad.
Christian Tarsney: But you could say exactly the same thing about birth, right? So before you were born, you didn’t have any experiences. And well, on the one hand, if you know that you’re going to die in five years, you might be very upset about that, but if you’re five years old and you know that five years ago you didn’t exist, people don’t tend to be very upset about that. And if you think that the past and the future should be on a par, that there is no fundamental asymmetry between those two directions in time, one conclusion that people have argued for is maybe we should be sanguine about the future, including sanguine about our own mortality, in the same way that we’re sanguine about the past and sanguine about the fact that we haven’t existed forever. Which I’m not sure if I can get myself into the headspace of really internalizing that attitude. But I think it’s a reasonably compelling argument and something that maybe some people can do better than I can.
Robert Wiblin: I feel like that’s easy to resolve because I’m just like, yeah, it’s terrible that I didn’t used to exist. It’s terrible that I was born as late as I was. I should have been born 1 billion years earlier and lived through the entire length of it, but there’s not much I can do about that. I can go to the gym and try to live longer, but I can’t go to the gym and try to be born earlier. So it’s kind of water under the bridge, yeah?
Christian Tarsney: Yeah. Right. That could be the conclusion you reach too.
Robert Wiblin: Alright. We’ll stick up links to those papers and people can dig in if they’d like to learn more.
Robert Wiblin: Let’s move on and talk about a problem in moral philosophy known as fanaticism. Yeah, what is the problem of fanaticism for those who are not familiar?
Christian Tarsney: Roughly the problem is that if you are an expected value maximizer, which means that when you’re making choices you just evaluate an option by taking all the possible outcomes and you assign them numeric values, the quantity of value or goodness that would be realized in this outcome, and then you just take a probability-weighted sum, the probability times the value for each of the possible outcomes, and add those all up and that tells you how good the option is…
Christian Tarsney: Well, if you make decisions like that, then you can end up preferring options that give you only a very tiny chance of an astronomically good outcome over options that give you certainty of a very good outcome, or you can prefer certainty of a bad outcome over an option that gives you near certainty of a very good outcome, but just a tiny, tiny, tiny probability of an astronomically bad outcome. And a lot of people find this counterintuitive.
Robert Wiblin: So the basic thing is that very unlikely outcomes that are massive in their magnitude, and that would be much more important than the other outcomes in some sense, end up dominating the entire expected value calculation and dominating your decision, even though they’re incredibly improbable. And that just feels intuitively wrong and unappealing.
Christian Tarsney: Well, here’s an example that I find drives home the intuition. So suppose that you have the opportunity to really control the fate of the universe. You have two options, you have a safe option that will ensure that the universe contains, over its whole history, 1 trillion happy people with very good lives, or you have the option to take a gamble. And the way the gamble works is almost certainly the outcome will be very bad. So there’ll be 1 trillion unhappy people, or 1 trillion people with say hellish suffering, but there’s some teeny, teeny, tiny probability, say one in a googol, 10 to the 100, that you get a blank check where you can just produce any finite number of happy people you want. Just fill in a number.
Christian Tarsney: And if you’re trying to maximize the expected quantity of happiness or the expected number of happy people in the world, of course you want to do that second thing. But there is, in addition to just the counterintuitiveness of it, there’s a thought like, well, what we care about is the actual outcome of our choices, not the expectation. And if you take the risky option and the thing that’s almost certainly going to happen happens, which is you get a very terrible outcome, the fact that it was good in expectation doesn’t give you any consolation, or doesn’t seem to retrospectively justify your choice at all.
Robert Wiblin: Yeah. I think this can show up in other ways as well. One that jumps to mind is the dominant view among people who study this kind of thing is that insects probably aren’t conscious, and if they are conscious, they’re probably not very conscious. But we’re not super sure about that, so maybe there’s a 1 in 1,000 chance that insects are conscious to a significant degree. And there’s so many insects, it’s just phenomenal how many insects there are relative to how many humans, it’s a very, very large multiple. A fanatical position might be someone who says, well, I’m just going to maximize expected value, and I think there’s a 1 in 1,000 chance that insects are conscious to an important degree, and so I’m going to focus all my attention on trying to improve the wellbeing of insects. So this is one that doesn’t involve the time as much, but involves a change of focus based on a longshot possibility that something really matters even though it probably doesn’t.
Christian Tarsney: I think in that case too it seems counterintuitive to throw away, for instance, the opportunity for a very good outcome for this very tiny probability of a much better outcome. But then I think the other important thing — and maybe something that people underappreciate — is just that there isn’t any great, at least any widely accepted, positive argument for the kind of risk-neutral expected value maximization that leads you to fanaticism. And in fact, the standard expectational theory of decision making under risk doesn’t force you to be fanatical in that way.
Robert Wiblin: Okay, interesting. Maybe let’s first lay out what is the case in favor of having a fanatical style of decision making where you’re just going to let that tail wag the dog?
Christian Tarsney: There’s a few arguments you could make. One route is just to defend risk-neutral expected value maximization. What that means is you have some way of measuring value that’s independent of your preferences towards risk. So for instance just to simplify, I care about the number of happy people and the number of unhappy people that ever exist, and so the value of an outcome is just say number of happy people minus number of unhappy people. And you might just think, well, the intuitive response to risk is to value outcomes in proportion to quantitatively how good they are, and multiplying that by probability and risk-neutral expected value maximization just feels right.
Christian Tarsney: There’s also more theoretical arguments you can give. So for instance, Harsanyi’s aggregation theorem gets you something like at least risk neutrality in the number of people who you can benefit to a given degree. But it requires you to accept some controversial premises like the ex-ante Pareto principle. So if you assume that each individual is an expected utility maximizer and you say if some option gives greater expected utility for each individual, then we should prefer it. There are various reasons why you might reject that.
Robert Wiblin: The underlying principle there is that if someone’s better off and no one else is worse off, then it’s going to be better. And I guess Harsanyi tried to do a bit of mathematical alchemy to convert that into a view that you should maximize expected value, which is to say multiply the probability of each outcome by the value of that outcome, then add all those up and maximize the total.
Christian Tarsney: So it’s a little complicated, for instance because the Harsanyi theorem allows individuals to be very risk averse, for instance, with respect to years of happy experience or whatever. But what it does say, roughly, is well, if I can say benefit N individuals to a certain degree with probability P or I can benefit M individuals to that same degree with probability Q then which thing I should do is just determined by multiplying the number of people times the probability.
Christian Tarsney: There’s another way of justifying fanaticism that doesn’t depend on a commitment to risk-neutral expected value maximization. And this is something that Nick Beckstead and Teruji Thomas have explored in a GPI working paper that’s based on a part of Nick Beckstead’s dissertation. Roughly the argument is, well, look, suppose that I can have some good outcome with probability P, or I can have a much better outcome with let’s say we multiply P by some factor, like 0.99 or something, so I reduce the probability of the good outcome by 1% but I can increase how good the outcome is by an arbitrarily large amount.
Christian Tarsney: There must be some amount by which you could increase the value of the outcome such that you’d be willing to accept a 1% decrease in this probability. And if you think that for any probability and any magnitude of goodness or value, you’re willing to accept that 1% reduction in probability for a sufficiently large increase in the magnitude of the payoff, then you just iterate that enough times and ultimately you’re preferring a tiny probability of a ridiculously good payoff to certainty of even potentially a very good payoff. So that allows you to be, for instance, risk averse with respect to value, but nevertheless you end up being at least in principle vulnerable to fanaticism.
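Here’s a minimal numeric sketch of that iteration argument. The square-root utility function and the 1.1 growth factor are illustrative assumptions, not anything from the Beckstead and Thomas paper; the point is just that an agent who accepts each individual trade ends up preferring a tiny probability of an enormous payoff:

```python
import math

# An illustrative agent who is risk averse with respect to value (square-root
# utility over value, an assumption for this sketch) but accepts every
# "1% less probable, sufficiently bigger payoff" trade.

def expected_utility(p, value):
    return p * math.sqrt(value)   # concave in value, but unbounded above

p, value = 1.0, 1.0               # start from one unit of value for certain
growth = 1.1                      # each trade multiplies the payoff by 1.1, which
                                  # more than offsets the 1% probability cut
                                  # (any growth factor above about 1.0203 would do)

for _ in range(2000):
    # Each individual trade strictly raises expected utility, so the agent accepts it.
    assert expected_utility(0.99 * p, growth * value) > expected_utility(p, value)
    p, value = 0.99 * p, growth * value

print(f"after 2000 trades: probability ~ {p:.2e}, payoff ~ {value:.2e}")
print(f"expected utility rose from 1.0 to {expected_utility(p, value):.2e}")
# A utility function that is bounded above would eventually refuse one of these
# trades, which is one of the ways of getting off the bus discussed below.
```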
Robert Wiblin: Let’s take the other side now. How big are the other worries about fanaticism? Or what are the downsides, and what ways might we work around it?
Christian Tarsney: Well, to me the biggest reason to not be blithely fanatical or blithely maximize expected value is just that the arguments for it are only moderately compelling. So a thing that many philosophers and probably many people in the EA community misunderstand about standard decision theory is that the standard widely accepted theory of decision making under risk, expected utility theory, what it tells you roughly is that you should make choices in a way that can be represented as assigning numerical values to outcomes, and then multiplying those values by probability and maximizing the expectation.
Christian Tarsney: But it doesn’t tell you anything about how you should go about assigning those numerical values to outcomes, and it doesn’t tell you, for instance, if you have the independently given ethical scale like I care about the number of happy lives, even assuming that you should rank outcomes according to how good they are — so more happy people is always better than fewer happy people — nevertheless, you combine that with say the Von Neumann-Morgenstern axioms which are one of the standard formulations of expected utility theory, and the conclusion you get is just that you should maximize the expectation of some increasing function of the number of happy lives.
Christian Tarsney: But that increasing function could be, for instance, bounded above, so that the more happy people already exist, the less you care at the margin about an additional happy person. So that’s to say standard orthodox decision theory doesn’t force you to be fanatical, and with the arguments that do force you to be fanatical, there are various ways that you can get off the bus.
Robert Wiblin: Okay. So the idea here is that the basic principles in decision theory and expected value theory, the ones we usually think we’re going to have to work with, say that if more happy people is good, then twice as many happy people is going to be better than 1x as many happy people. However, it doesn’t show that it has to be twice as good. And that means, because you can get declining returns on these larger and larger benefits, that you’re potentially going to be less vulnerable to overweighting the largest-scale outcomes, because you can tamp them down by saying, well, maybe twice as many happy people is only 1% better than 1x as many happy people.
Christian Tarsney: Yeah, exactly. You could, for instance, maximize the expectation of the logarithm of the number of happy people if you wanted to, but that would still be vulnerable to fanaticism, because the log is unbounded. But you can have functions that, like the log, are concave, but that also have a horizontal asymptote, so they’re bounded above.
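A small sketch of that contrast: a concave but unbounded function (the log) versus a concave function that’s bounded above. The satiation scale K, the size of the sure thing, and the longshot probability are all illustrative choices:

```python
import math

# Two concave ways of valuing n happy lives. The log is unbounded above, so
# some (absurdly large) payoff always compensates for a tiny probability; the
# bounded function is capped at 1, so once the probability is small enough,
# no payoff, however large, can compensate.
K = 1e6  # satiation scale for the bounded function

def u_log(n):     return math.log(1.0 + n)
def u_bounded(n): return n / (n + K)

sure_thing = 1e3   # 1,000 happy lives for certain
p = 1e-9           # probability of the longshot payoff

for name, u, u_max in [("log", u_log, math.inf), ("bounded", u_bounded, 1.0)]:
    target = u(sure_thing) / p   # the longshot needs u(n) >= target to win
    if target >= u_max:
        print(f"{name}: no payoff, however large, beats the sure thing")
    else:
        # Inverting the log: n must be at least exp(target) - 1, i.e. roughly
        # 10 ** (target / ln 10) happy lives.
        print(f"{name}: the longshot wins once n is about 10^{target / math.log(10):.2e}")
```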
Robert Wiblin: To me it seems really intuitive, the idea that if something is good then twice as much of it is twice as good. It’s a little bit surprising to find out that that wouldn’t be a reasonably fundamental principle of rational decision making. Is there a way of making it intuitive why that doesn’t fall out of these kinds of axioms?
Christian Tarsney: Well, there’s maybe two things to say. One is in general we don’t think that twice as much of a good thing is twice as good. So money is the obvious example here. Of course if you get $1 billion tomorrow that would be life changing. If you get an additional $1 billion the day after that, that would be nice but it wouldn’t double the impact of that first $1 billion. So then you need some separate argument for why say happy lives behave differently than money. And maybe it seems intuitive to you, maybe the point is that we value happy lives intrinsically while we only value money instrumentally, or something like that. But at least it’s not automatic or axiomatic that anything that matters that we have some way of measuring how much of it there is, that the value of it has to scale linearly with the amount of it.
Robert Wiblin: The normal story there is that money is instrumentally valuable so it’s just useful as a means to an end. And assuming that my end was happiness, then I can’t buy twice as much happiness with $2 billion as I could with $1 billion. Maybe I could barely make myself any happier whatsoever, and so of course the second $1 billion isn’t equally as good as the first. But then with the thing that you terminally value, the thing that is valuable in itself, like happiness, if I could get twice as much happiness, that feels more intuitive that that is twice as good.
Christian Tarsney: Yeah. One way that you could respond is to say, well, maybe to some extent we value, say, individual happiness, or the existence of happy lives, not just intrinsically but also instrumentally. Or because it’s constitutive of some greater good, like we want there to be a flourishing human civilization or something like that. Or we want the universe to contain life and sentience and happiness. And once there’s enough life and happiness and sentience to satiate that need for the universe to contain happiness, then we care about additional increases in individual happiness or the number of happy people less, or something like that.
Christian Tarsney: But then the other thing to say is, grant your argument, for instance, that twice as many happy people is twice as good. Then there’s a further question. If I can have one outcome for sure or another outcome that’s twice as good with say 51% probability, should I prefer the twice as good outcome with 51% probability? Even conceding that it’s twice as good, it doesn’t automatically follow that I should just multiply that by the probability.
Robert Wiblin: So you’re saying you don’t necessarily have to do linear expected value maximization to be rational on this view?
Christian Tarsney: Yeah. Well, the thing that the standard axioms of expected utility theory tell you is suppose that you… Well, this isn’t part of expected utility theory, but suppose ethics gives us a ranking of outcomes, so more happy people or more happiness is better, and we stick that in exogenously. And then we also say you have to satisfy these axioms like independence and continuity and transitivity and so forth.
Christian Tarsney: Then the conclusion that spits out is that you need some utility function that’s increasing in the total amount of value or number of happy lives or whatever such that more happy people has greater utility, but that doesn’t mean that… That function doesn’t have to be linear, just nothing in the axioms forces that to be linear. So at least just an appeal to those axioms or appeal to the normative authority of expected utility theory doesn’t get you that jump from twice as good to we should weight it twice as much when we’re multiplying by probabilities.
Robert Wiblin: To what degree does this solve this problem of fanaticism? Should people think like, “Oh, well this has dealt with this issue to a pretty large extent”?
Christian Tarsney: Well, I definitely don’t think that the problem is resolved. So my own take on fanaticism and on decision making under risk, for whatever it’s worth, is fairly permissive. A weird and crazy view that I’m attracted to is that we’re only required to avoid choosing options that are what’s called first-order stochastically dominated, which means that you have two options, let’s call them option one and option two. And then there’s various possible outcomes that could result from either of those options. And for each of those outcomes, we ask what’s the probability if you choose option one or if you choose option two that you get not that outcome specifically, but an outcome that’s at least that good?
Christian Tarsney: If option one, for any possible outcome, gives you a greater overall probability of an outcome at least that desirable, then that seems like a pretty compelling reason to choose option one. Maybe a simple example would be helpful. Suppose that I’m going to flip a fair coin, and I offer you a choice between two tickets. One ticket will pay $1 if the coin lands heads and nothing if it lands tails, the other ticket will pay $2 if the coin lands tails, but nothing if it lands heads. So you don’t have what’s called state-wise dominance here, because if the coin lands heads then the first ticket gives you a better outcome, $1 rather than $0. But you do have stochastic dominance, because both tickets give you the same chance of at least $0, namely certainty, both tickets give you a 50% chance of at least $1, but the second ticket uniquely gives you a 50% chance of at least $2, and that seems a compelling argument for choosing it.
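Here’s a minimal check of that coin-ticket example, just tabulating P(payoff ≥ x) for each ticket:

```python
# Ticket A pays $1 on heads, ticket B pays $2 on tails. Neither is better in
# every state of the world, but B first-order stochastically dominates A:
# for every amount x, P(payoff >= x) under B is at least as high as under A.

ticket_a = [(0.5, 1.0), (0.5, 0.0)]  # (probability, payoff): $1 if heads, else $0
ticket_b = [(0.5, 0.0), (0.5, 2.0)]  # $2 if tails, else $0

def prob_at_least(lottery, x):
    return sum(p for p, payoff in lottery if payoff >= x)

for x in [0.0, 1.0, 2.0]:
    pa, pb = prob_at_least(ticket_a, x), prob_at_least(ticket_b, x)
    print(f"P(payoff >= ${x:.0f}):  A = {pa:.2f},  B = {pb:.2f}")
# A and B both give 1.00 at $0 and 0.50 at $1, but only B gives 0.50 at $2.
```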
Robert Wiblin: I see. I guess in a continuous case rather than a binary one, you would have to say, well, the worst case is at least as good in, say, scenario two as in scenario one. And the first percentile case is at least as good, and the second percentile case, and the median, all the way up to the best case, which is also as good or better. So across the whole distribution of outcomes from worst to best, taking them percentile by percentile, the second scenario is always equal or better. And so it would seem crazy to choose the option that is always equally good or worse, no matter how lucky you get.
Christian Tarsney: Right. Even though there are states of the world where the stochastically dominant option will turn out worse, nevertheless the distribution of possible outcomes is better.
Robert Wiblin: Okay. So you’re saying if you compare the scenario where you get unlucky in scenario two versus lucky in scenario one, scenario one could end up better. But ex-ante, before you know whether you got lucky with the outcome or not, it was worse at every point.
Christian Tarsney: Yeah, exactly.
Robert Wiblin: Okay. And so your view is a fairly narrow one, that we only need to take options that are stochastically dominant.
Christian Tarsney: Or that are not stochastically dominated.
Robert Wiblin: How is that different?
Christian Tarsney: Suppose I have three options, one, two, and three. It could be that, for instance, one stochastically dominates two, but three neither dominates nor is dominated by anything. And then, yeah, three doesn’t stochastically dominate anything else, but the important thing is that it’s not stochastically dominated. So there’s no other option where you can say, clearly this is better than three. And that means three is permissible.
Robert Wiblin: Seems like in practice, and the world being so messy, there’s so many different potential outcomes with different rankings from 0% to 100% of luckiness that it’s going to be rare to find options that are stochastically dominated, or at least that there’ll be a wide range of options that aren’t stochastically dominated and so this could in the real world end up being a very permissive theory of what it is to make a rational decision.
Christian Tarsney: Yeah, that’s absolutely right. When you asked earlier, well, should we just think that, for instance, this point about what expected utility theory tells us, that this settles the problem with fanaticism? One reason not to think that is in effect, what standard expected utility theory tells you is just this stochastic dominance thing. It constrains your choices under risk up to stochastic dominance, but no further. And as you say, that’s just very, very, very permissive. For instance, if I can save one life for sure or 100 lives with probability 0.99, both the ‘it’s just stochastic dominance’ view and the ‘it’s just axioms of expected utility theory’ view say you’re permitted to do either thing, but intuitively that save 100 lives with probability 0.99 looks the better option.
Robert Wiblin: Are there any arguments that we could make for fanaticism, or for the more linear maximize expected value view, that might get us further than just saying you shouldn’t choose something that’s stochastically dominated? And that might be a bit closer to common sense in this kind of thing, where you’re saying, well, a 99% chance of saving 100 lives has got to be better than a certainty of saving one life, because it’s 99 people in expectation?
Christian Tarsney: Yeah, so the argument that I’ve been exploring in my work recently, in particular in this working paper Exceeding expectations, looks at what happens to the stochastic dominance criterion when you add in what you might call background uncertainty. Suppose that you’re a classical utilitarian, just for example. So you measure the value of an outcome by the total amount of, say, happiness minus suffering in the resulting world. And when you make a choice, you’re unsure about two things that we can separate out if we want to.
Christian Tarsney: One is the outcome of your choice. You can think of that as what happens in your future light cone, in the part of the universe that you can affect. But you’re also uncertain about how much value there is in the universe to begin with, so in the past or in faraway galaxies or whatever. And it turns out that if you’re sufficiently uncertain about the amount of value that’s in the universe to begin with, then an option whose local outcome, the thing that happens inside your future light cone, has greater expected value — but isn’t in a vacuum stochastically dominant — becomes stochastically dominant once you add in that background uncertainty.
Christian Tarsney: If you try to model this numerically in a way that at least seems plausible to me, you get the conclusion that actually this very minimal stochastic dominance criterion, once you account for our background uncertainty about the amount of value in the universe, recovers most of risk-neutral expected value maximization. And for instance it can tell us you should save the 100 lives with probability 0.99 rather than one life for sure, while still giving us an out in these extreme fanatical cases.
Robert Wiblin: Okay, interesting. Is there any way of giving an intuitive verbal explanation of why that is, that all of that background uncertainty ends up recovering something closer to just the normal maximize expected value?
Christian Tarsney: Yeah, I can try. For example, well, take that one happy person for sure versus 100 happy people with probability 0.99. If you’re just thinking about that choice in a vacuum and you imagine that there’s nothing else in the universe that you’re uncertain about, you can say, “Well, if I take the sure thing then I’m absolutely guaranteed that the total amount of value in the universe will be at least 1.” If the units are happy people in existence or something. Versus, “If I take the second option, I’m not sure that the universe will be at least that good.”
Christian Tarsney: But when you add in substantial background uncertainty, then you can no longer say that. Because even if you take the apparent sure-thing option, you’re no longer certain that the universe as a whole will have a value of at least 1. Because it could be that the rest of the universe, the part that you can’t affect, is already really bad. And then if you want to think about, okay, so there’s this threshold of 1, say I’m really interested in the universe having a value of at least 1, well, one way in which my choice could bring that about is that the amount of value in the universe to begin with is somewhere between 0 and 1, and then I add this extra unit that puts us over the threshold.
Christian Tarsney: But another way it could happen is I choose the riskier option, and it pays off, which happens with probability 0.99, and the amount of value in the universe to begin with was somewhere between −99 and 1. And so that extra 100 units of value now puts us over the threshold to have more than 1 total value in the universe.
Robert Wiblin: And it puts you over the threshold in far more scenarios, because there’s a wider range there.
Christian Tarsney: Exactly, right. So it is much more likely that the total amount of pre-existing value in the universe will be between −99 and 1 than that it’ll be between 0 and 1.
Robert Wiblin: And there you were using a strict cutoff of the boundary being 1, where it’s no good below and it’s good above, but we can extrapolate the same underlying idea across the whole range of possible thresholds, rather than a single cutoff.
Christian Tarsney: Yeah. And what stochastic dominance means is basically you could pick any number you want, 1 or −10, or 1,000, and exactly the same argument will work, that choosing the expectationally superior option will increase your overall probability of ending up with a universe that’s at least that good.
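Here’s a rough sketch of that argument, with illustrative assumptions of my own: a heavy-tailed (Cauchy) background distribution for the value of the universe outside your control, with an arbitrary scale of 1,000 units. These are stand-ins, not the assumptions of the Exceeding expectations paper. The code checks, for a grid of thresholds, which option gives a higher probability that the total value of the universe is at least that high:

```python
import math

# Assumed background distribution: Cauchy-shaped (heavy-tailed) uncertainty
# about how much value the universe contains to begin with, scale 1,000 units.
# Heavy tails matter here: the result needs sufficiently fat-tailed uncertainty.

def background_sf(x, scale=1000.0):
    """P(background value >= x) for a Cauchy distribution centred at 0."""
    return 0.5 - math.atan(x / scale) / math.pi

def p_total_at_least(t, lottery):
    """P(background + local payoff >= t); lottery = [(probability, payoff), ...]."""
    return sum(p * background_sf(t - v) for p, v in lottery)

safe  = [(1.0, 1.0)]                  # one unit of value for certain
risky = [(0.99, 100.0), (0.01, 0.0)]  # 100 units with probability 0.99

for t in [-10_000, -100, 0, 1, 50, 100, 10_000]:
    ps, pr = p_total_at_least(t, safe), p_total_at_least(t, risky)
    print(f"t = {t:>7}: P(total >= t | safe) = {ps:.6f}, P(total >= t | risky) = {pr:.6f}")
# At every threshold in this grid the risky option gives at least as high a
# probability of the universe containing at least that much value, which is
# the sense in which it becomes stochastically dominant.
```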
Robert Wiblin: Okay. Now I get it. Now I guess my objection is the other way round. This seems like such a strong argument that it might just bring back fanaticism, because you’ve gotten too close again to just the plain maximize expected value view.
Christian Tarsney: Yeah, two things to say about that. One is I think if it does, then we’ve got actually quite a powerful argument for fanaticism, because the argument that you shouldn’t choose a stochastically dominated option just seems extremely compelling.
Robert Wiblin: It’s so powerful.
Christian Tarsney: Yeah. There are axiomatic arguments you can give for it, including the standard axioms of expected utility theory, that people find quite compelling. I think if it just turns out that the fanatical option in a lot of these real-world cases is stochastically dominant, then that’s a better argument than we had before for embracing fanaticism. One of the major motivations for this project is that this phenomenon of background uncertainty inducing stochastic dominance happens really easily when you’re thinking about moderate probabilities of medium-sized payoffs. But when you hold fixed the expected value of an option and get that expected value from smaller and smaller probabilities of larger and larger payoffs, as you do that Pascalian transformation on an option, it takes more and more background uncertainty to make it stochastically dominant.
Christian Tarsney: And so you get this nice phenomenon where if you know what your background uncertainty is, your probability distribution about the amount of value in the universe to begin with, then you can actually set a threshold where you can say, “Well, if I have the option to produce one unit of value with certainty versus a tiny probability of an astronomically good outcome, no matter how astronomically good that outcome is, the probability has to be at least X for me to be compelled, rationally required to do it.” So 10 to the −10 or something, and below that I’m still permitted to do the fanatical thing but I’m also permitted to take the sure thing.
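The same kind of check shows where the argument runs out. If the risky option is Pascalian, a huge payoff at a tiny probability (the numbers below are again illustrative), neither option comes out ahead at every threshold, so on the stochastic dominance view both are permissible:

```python
import math

# Same background-uncertainty setup as the previous sketch, but the risky
# option is now Pascalian: 10^12 units of value with probability 10^-10.

def background_sf(x, scale=1000.0):
    return 0.5 - math.atan(x / scale) / math.pi   # P(background value >= x)

def p_total_at_least(t, lottery):
    return sum(p * background_sf(t - v) for p, v in lottery)

safe      = [(1.0, 1.0)]
pascalian = [(1e-10, 1e12), (1.0 - 1e-10, 0.0)]

safe_wins = pascalian_wins = 0
for t in [-5000.0, -100.0, 0.0, 100.0, 5000.0, 1e11, 1e12]:
    ps, pp = p_total_at_least(t, safe), p_total_at_least(t, pascalian)
    safe_wins      += ps > pp
    pascalian_wins += pp > ps
print(f"thresholds where the safe option gives higher P(total >= t): {safe_wins}")
print(f"thresholds where the Pascalian option does: {pascalian_wins}")
# Each option does better at some thresholds, so neither stochastically
# dominates: on this view you're permitted to choose either one.
```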
Robert Wiblin: I see. So it will get you fanaticism up until the point where the goodness of the outcome that’s necessary to try to prompt you to be fanatical gets large relative to the background uncertainty about all of the different scenarios of how well the entire future could go. And so once you start getting to universe-scale good outcomes, that’s no longer big relative to the underlying uncertainty that you had, regardless of your actions, about how well things could go. Because it’s now spanning the full range from the best to worst outcome.
Christian Tarsney: Yeah.
Robert Wiblin: And so the stochastic dominance argument no longer applies, and you have a sphere of permissibility.
Christian Tarsney: Yeah, yeah. Very roughly, you’re forced to maximize expected value in most cases where the outcomes that you’re looking at are smaller than, for instance, the interquartile range of your background uncertainty, meaning the difference between the 25th and 75th percentiles of the amount of value there could be in the universe to begin with. That’s one simple way of measuring how uncertain you are. If the local outcomes that you’re considering are much smaller than that, then you’re typically required, under some further conditions (in particular, you need sufficiently heavy tails and things like that), to maximize expected value in almost all cases. But when you’re dealing with outcomes that are potentially much larger than that interquartile range, or than the scale of your background uncertainty more generally, then the stochastic dominance requirement becomes a lot more permissive.
Robert Wiblin: Okay. This seems a very neat potential middle ground where it gets you quite a lot of fanaticism but not so much that it seems to really go off the deep end. But for an individual, it seems it might prompt you to be very fanatical, because we’re all tiny ants just adding little bits of sand to the hill. And so perhaps the effect any one of us in the decision can ever hope to make about the total goodness of the universe relative to the background uncertainty is minuscule, and so in practice maybe is going to spit out a fanatical answer most of the time except in very wacky cases.
Christian Tarsney: Well, unless you think that… Unless what you’re concerned with as an individual is making some tiny difference to the probability that humanity does or doesn’t cross some threshold that makes a difference. For instance existential risks. If you think that if I devote my career to trying to reduce biological risks, say I can individually reduce the probability of premature human extinction by 1 in 1 billion or 1 in 1 trillion or something like that. Then in some sense, you’re still an individual just making a small marginal difference, but that difference takes the form of changing the probability of an astronomically good or astronomically bad outcome. So in that case the stochastic dominance view combined with what I take to be reasonable assumptions about our background uncertainty might say that you’re actually permitted to go either way and opt for the sure thing, work on global poverty or something.
Robert Wiblin: I see. That’s because the probability is sufficiently low that… Or indeed the lower the probability the wider the range of permissibility, because the probability difference is so small relative to the background uncertainty?
Christian Tarsney: Yeah. I mean basically the way to think about it is, you’re getting lots of expected value from, say, reducing extinction risks, because the potential payoff if humanity becomes a grand interstellar civilization or something is so astronomical, say it’s 10 to the 52 happy lives or something. But what the stochastic dominance requirement under background uncertainty forces you to do is treat those increases in the total amount of value in the universe as linear up to roughly the scale of your background uncertainty. But if the scale of your background uncertainty is say 10 to the 15 or 10 to the 20 human lives, then you’re forced to regard ensuring the future existence of humanity as at least good to the degree 10 to the 20. But that’s a lot less than 10 to the 52.
Robert Wiblin: Now, maybe I’m going to seem nuts here. But it seems like one aspect of the background uncertainty is, say, whether there’ll be aliens that will arise at some point in the future and colonize some significant fraction of the universe in our stead, even if we go extinct. Or maybe there are aliens in the past, or aliens outside of the accessible universe somewhere else? Or maybe there’s a lot of uncertainty just about what is of moral value, and how much moral value can exist? Because we don’t know how valuable good experiences are, or how valuable justice is. And so in fact, the amount of uncertainty about the goodness of the future of all of the universe is larger even than what we can directly affect by guiding Earth-originating life.
Christian Tarsney: Yeah, I think that’s totally plausible. So if you think that the universe is really enormous and there are probably other civilizations out there, and however good our civilization might be, there’s at least a substantial probability that there are many, many civilizations already in the universe or far away in distant galaxies that are achieving that same level of value, and we don’t know exactly how much value that is, or how many of those civilizations there are, and so forth… Then, yeah, I think it’s totally plausible that the scale of our background uncertainty about the value in the universe could be many orders of magnitude greater than the potential value of human civilization. But this is the sort of thing where of course what practical conclusions you reach depends sensitively on what numbers you plug in, and this is all pretty subjective. So it’s hard to really pound your fist on the table and say our background uncertainty should have this scale rather than that scale.
Robert Wiblin: Okay, makes sense. Maybe to wrap up this section, what should listeners take away from this if they’re someone who has been trying to themselves grapple with this question of how fanatical to be in their expected value maximization choices, in like, do I work on existential risk or do I work on something with a high probability of benefiting people in the immediate term?
Christian Tarsney: Yeah. Unfortunately my own take, or the take that’s given by my view, is something like, well, it depends delicately on the numbers, and if you think that you can make, say, a 10 to the −10 difference in the probability of extinction, well, then it just depends on exactly how uncertain you are about the value of the universe, and so forth. But a little bit more philosophically, or maybe a little bit more helpfully, I guess, I would say number one it’s worth bearing in mind that it’s not just automatic and axiomatic and beyond dispute that you have to be a kind of naive expected value maximizer. There are good reasons to be skeptical of that. And at least I don’t think it’s unreasonable for somebody to, in sufficiently extreme cases, opt for the sure thing rather than just being led anywhere by any tiny probability of 10 to the 52 future lives or something like that.
Christian Tarsney: But then on the other hand, insofar as this argument about stochastic dominance under background risk is compelling, it means that at least in a lot of ordinary cases where we’re not considering really extreme probabilities, actually the fanatical thing shouldn’t seem so counterintuitive or—
Robert Wiblin: Outlandish.
Christian Tarsney: Right, yeah. Because actually what you’re doing is in some sense quite safe. Whatever target you’re interested in, you’re increasing the probability that the universe as a whole reaches or exceeds that target.
Robert Wiblin: Okay, nice. What’s been the reception to this idea among philosophers? Has it been warmly received?
Christian Tarsney: I would say people have very different responses. I think people generally find the argument and the results interesting. Some people find the crazy view at least worth taking seriously, and other people don’t. I think a common objection that I’ve encountered and that I think is totally reasonable is, well, we have these intuitions — for instance that you should maximize expected value — in ordinary cases, but you’re not required to in these kinds of extreme fanatical cases. And maybe this kind of the stochastic dominance rule combined with our actual empirical background uncertainty is a decent extensional match for our intuitions, but it doesn’t seem super plausible that it really gets at the explanation for our intuition, right?
Robert Wiblin: Yeah.
Christian Tarsney: Because we’re not walking around thinking about our uncertainty about the amount of value in distant galaxies or something like that. So is it really a point in favor of this theory that it captures our intuitions if it’s not capturing them for the right reasons?
Robert Wiblin: Yeah, I guess it captures the conclusion, but for a reason that is not plausibly related to why we actually believe the thing that we believe.
Christian Tarsney: Yeah. Now, I mean, I want to argue, and this is something I don’t do in the existing working paper, and something I still need to work out in more detail, but it does seem to me that there’s actually more of a connection than you might initially think. For instance, if you’re making, say, just self-interested prudential decisions about your money, one good reason to maximize expected value when you’re making small bets is that you have lots of other uncertainty about the rest of your financial future. You face this long run of other financial choices, and that gets you probably not all the way to stochastic dominance — because for instance your uncertainty about your future income is probably not unbounded — but it means that a very wide range of risk attitudes will agree that you ought to do the expectation-maximizing thing.
Christian Tarsney: And I think people are plausibly sensitive to the fact that you face this long run of future choices, that you face other uncertainty, that adopting a policy of maximizing expected value is extremely likely to pay off more in the long run. I do think there’s some connection here, but I don’t have it fully worked out in my head yet.
Robert Wiblin: Whereas in one-off bets you can’t make that same argument about in the long run over many choices it’s necessarily going to pay off, or very likely to pay off.
Christian Tarsney: Yeah, exactly.
Robert Wiblin: Okay. I don’t want to introduce Pascal’s mugging here because that would require us to lay out Pascal’s mugging for those who don’t know it. But I guess, yeah, a savvy listener can think about how this might interact with the Pascal’s mugging case, and we’ll stick up a link to that thought experiment.
Robert Wiblin: Let’s talk now about challenges to longtermism that stem from us not being able to foresee, properly, the effects of our actions on the very long term. For those who are fresh to this topic, what’s the basic trouble here?
Christian Tarsney: Well, so longtermism is, very roughly, the view that what we ought to do in most choice situations — or most of the most important choice situations that we face — is mainly determined by the effects of our actions on the very far future. And the kind of simple, intuitive argument for longtermism is that the far future is just potentially vast. Its scale is much greater than the scale of the near future, if you think human-originating civilization could exist for billions of years, or something like that. But there’s this countervailing effect that’s harder to quantify, which is, as we look further and further into the future, it gets harder and harder to predict not just what the future will look like, but what the effects of our present actions or interventions will be. And it’s not at all obvious, when you quantify this, whether the first factor, the scale of the far future, is larger or more powerful than the second factor, the declining predictability of the future and the difficulty of predictably influencing it.
Robert Wiblin: So, yeah, why think that we will be able to predict the consequences of our actions, or really anything that we care about, 100 or 1,000 years in the future? It’s hard enough to predict what effect the things I do are going to have in one month or one year, let alone that far out.
Christian Tarsney: Yeah. So, I think my own response to this at least, is that our ability to predict and predictably influence the future is a matter of degree. And one simple reason to not just throw our hands up and say, “Well, we can’t possibly predict the future more than 100 years in advance,” is, well, we can predict the future, and we can predict the effects of our actions, one year in advance. And well now think about two years, three years, and so forth, presumably it gets harder to predict the future, but it would be weird if there was some point where it just discontinuously went to zero. Where our ability to predictably influence the future went from non-zero and not that great, but we can have some predictable effect, to all of a sudden you can have no predictable effect at all.
Christian Tarsney: So I think the right way to think about this and model this is that our ability to predictably influence the far future decreases, and we want to understand exactly what that means and the rate at which it decreases. The other answer, from a different angle, is that we can plausibly have predictable effects on the very long-run future if we can have effects on the nearer future that are persistent. The most obvious example is if we can make the difference between humanity surviving or not surviving the next century. Plausibly, if we survive the next century or the next millennium, our civilization has at least a non-trivial chance of persisting for many thousands of years, maybe millions of years, maybe even billions of years.
Christian Tarsney: And on the other hand, if we don’t survive the next century, it’s very plausible that no civilization is going to exist on Earth, maybe for the rest of time, certainly for millions and millions of years. And so all you need to be able to do to have a predictable effect on the very long-run future, or at least to have some non-trivial chance of an effect on the very long-run future, is to affect the medium-term future in ways that are persistent.
Robert Wiblin: Yeah. As you were talking about how the uncertainty gets bigger and bigger year after year… I wonder whether the rate at which the uncertainty increases kind of declines over time, because you think, imagine if I did something and it successfully had a positive impact one year from now, and two years from now, and three years from now, it’s possible that it will flip in the fourth year and will go to zero, or become negative. But it seems like the rate at which that kind of flipping thing happens should decrease the longer the effect has been positive. To some extent, the uncertainty about whether it was a good or bad thing might decline over time.
Christian Tarsney: Yeah, I think that’s probably right. We haven’t yet asked the question of how do you quantify the rate at which your ability to predictably influence the far future declines, but a natural way to quantify it is you want to put the world into some desirable state, call it S, rather than a not-desirable, or less-desirable state, not-S. And you want to know, if I can make the difference, say, in the next 10 years, or 100 years, or something between the world being in state S and not-S, how likely is it that my action will still be making the difference or still determining the state of the world 1,000 years or 1 million years or something from now? And the kind of straightforward way to model that is maybe it declines exponentially. So, maybe there’s a 1% chance that I can make the difference in the next 100 years.
Christian Tarsney: And then after that, for every century, there’s a 50% chance that something’s going to come along where my action no longer makes the difference, right? So, that would be a kind of constant rate of fall-off, and it would produce, in effect, an exponential discount rate on the value of our interventions. But it could be that, for instance, the probability that some exogenous event comes along and spoils the effect of my intervention does decline with time, but nevertheless declines slowly enough that this discounting effect is still quite significant, and really cuts into the expected value of trying to influence the far future.
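As a toy illustration of that kind of constant washout rate: if there’s a fixed probability r per century that some exogenous event undoes your intervention, the expected number of centuries for which it still makes the difference is a geometric series summing to roughly 1/r. A minimal sketch with illustrative rates:

```python
# Toy persistence model: you put the world into the better state now, and each
# later century there's a constant probability r that some exogenous event
# undoes the effect. The rates below are illustrative, not estimates from the paper.

def expected_centuries_of_impact(r, horizon_centuries=10**7):
    # Closed form of sum_{t=0}^{H-1} (1 - r)**t, assuming r > 0.
    return (1.0 - (1.0 - r) ** horizon_centuries) / r

for r in [0.5, 0.05, 0.005]:
    print(f"washout rate {r:.3f} per century -> "
          f"about {expected_centuries_of_impact(r):,.0f} centuries of expected impact")
# However long civilization could in principle last, a constant washout rate r
# caps your expected impact at roughly 1/r centuries, which is what produces
# the effective exponential discount described above.
```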
Robert Wiblin: Yeah, I guess that could happen two different ways, or maybe these are just the same way, but if the positive effect has lasted 100 years, then maybe we’ve learned something, that that was a robustly positive intervention. And so it should be expected to be robustly positive in future centuries as well. Another thing might be that humanity could go through some transition to a far more stable state where things are less chaotic, and if they had a positive effect up until the beginning of that more stable situation, which we’re in, then it’s just going to persist being positive for as long as that stable situation persists after that.
Christian Tarsney: Yeah, that’s right. So, I mean, one way of arguing for longtermism in the face of these epistemic worries is to say, well, there’s at least a non-zero probability that, again, in the case of existential risk, that if our civilization survives the next 1,000 years, then all the dangers are behind us. We’ll be multiplanetary, there’s just no chance of these exogenous events coming along. And so we’ll survive until the heat death of the universe or something. And on the other hand, if we don’t survive, then maybe the rate at which life arises on planets is just so low that there’s essentially no chance that another civilization will ever replace us.
Christian Tarsney: And so at least if you have sort of, well, say, 1 in 1 billion credence in this hypothesis that says that the effects of avoiding or causing existential risk will be persistent until the heat death of the universe or something like that, then that’s enough to generate enormous amounts of expected value. But then we’re back to these worries about fanaticism, right? That so much of the expected value of existential risk reduction is coming from this maybe very improbable hypothesis of unlimited persistence.
Christian Tarsney: Yeah, so the purpose of that paper basically is to do some relatively simple, and as I think of it, kind of preliminary modeling of the relative weight of these two competing forces. So the way that I imagined things in that paper is, as I described it a moment ago, we want to put the world into some more desirable state rather than a less desirable state. And there’s some chance that you can succeed in doing that in the medium-term future, say the next 100 or 1,000 years. And then what you want to know is, what’s the probability that that effect will persist for a given length of time? And you imagine that there are two kinds of exogenous events that could come along. One is negative exogenous events, where you managed to put the world into the more desirable state, but then 1 million years later, 10,000 years later, or something, some event comes along that puts the world into the less desirable state anyway. So for instance, humanity goes extinct anyway.
Christian Tarsney: And then the other possibility is, say we fail to put the world into the more desirable state maybe because we focus on the short term instead of focusing on existential risk, and we go extinct, but then nevertheless, some event comes along later, like another intelligent civilization arises on Earth. And so by 1 million years from now, civilization’s back in business or something like that. The goal of the paper is… I think this is inevitably a quantitative exercise, to figure out what these kinds of considerations do to the expected value of the far future. But I tried to bite off a little piece of this and say, well, let’s suppose we’re total consequentialists. So we grant some normative assumptions that are favorable to longtermism and let’s suppose we’re just happy to be naive, expected value maximizers.
Christian Tarsney: So, I’m setting aside these worries about fanaticism for most of the paper, but then I’m going to try to examine the set of empirical assumptions or empirical beliefs or worldview that you might have that’s kind of least favorable to longtermism, within reason. And that means, among other things, thinking that maybe there is some irreducible rate at which these exogenous events come along, like an ineliminable minimum chance of extinction per century that produces this kind of permanent exponential discount rate on the effects of our actions. And so the purpose of the paper is to kind of model, if humanity either is just existing in a good state for some indefinite period of time, or maybe we’re expanding, we’re settling more of the universe, but there’s also this ineliminable possibility of exogenous events coming along, does the expected value of attempts to influence the far future — for instance by existential risk reduction — still look so good in comparison to the expected value of attempts to improve the more short-term future?
Robert Wiblin: Yeah. Okay. So, yeah, earlier I was saying, well, if you make some positive intervention, then that might get washed away in future, but the rate at which it washes away probably declines over time. Or there’s some reason to think that might be true. And here you’re considering a relative kind of worst-case scenario where it’s the case that the rate of your actions being washed away in the future just remains constant. So every 100 years, the odds of something great that you accomplished just becoming irrelevant remain, say, 1%, 10%, 15% or whatever it may be. In which case you get this geometric decay on the value that it provides over time. And then you’re thinking, well, in that fairly dismal scenario, is it still worth focusing on the very long term? Or do longtermist projects then get dominated by things that have a bigger impact on the immediate term?
Christian Tarsney: Yeah. That’s right. I’m trying to do two things. One is to develop a fairly general model where you can think about any longtermist intervention you want under the rubric of more-desirable state, less-desirable state, and how long does that effect persist. But then the case that I’m trying to actually address numerically is the expected value of existential risk reduction. And in that case, the conclusion that I reach, and I think this is all sort of… There’s an ineliminable level of subjectivity here, and so other people should take a crack at this and see what conclusions come to you, but the conclusion that I come to anyway, is that even when you make what look like the most pessimistic assumptions for longtermism within reason on these kinds of empirical questions, nevertheless, the expected value of existential risk reduction still looks quite good in comparison to short-termist alternatives.
Robert Wiblin: Okay. Yeah. Is there any way of kind of summing up all the empirical ingredients that go into that pie?
Christian Tarsney: Yeah. Well, the short answer is there’s a whole bunch of parameters in this model, each of which makes some difference, but probably the most important things you have to think about are, number one, how much of a difference can I make to the probability of the better outcome rather than the worse outcome? So humanity surviving rather than going extinct being realized in the medium term. Where what I do in the paper is a very kind of Fermi estimate-style thing where I just say, “Well, imagine that human civilization as a whole focused on nothing but ensuring its own survival, every waking minute of every human being’s day for the next 1,000 years, how much could we change the probability that we survive?” And I say, “Well, surely at least 1%”, and then, okay, what proportion of humanity’s work hours over the next millennium, say, can you buy for $1 million?
Christian Tarsney: And if you assume that the returns to existential risk reduction at least aren’t increasing… We typically assume things have diminishing returns, so, to be as pessimistic as possible, assume constant returns, and then that allows you to get a lower-bound estimate of how much you can change the probability of premature extinction. So then the second important empirical question is, how good do you think the survival of humanity would be? And that depends a lot on, are you thinking about a scenario where we just remain Earth-bound for say 500 million years until the sun gets too hot? Or a scenario where we expand into the universe, and so the value of human civilization is growing presumably cubically in time because we’re expanding in spatial dimensions?
Christian Tarsney: So, I consider both of those possibilities, and I try to make conservative assumptions about what the welfare of future people will be like. And then the final crucial piece is, what do you think the ineliminable long-term rate of exogenous events coming along is going to be? And again, in the spirit of just testing the robustness of longtermism and making the most pessimistic assumptions within reason, I say well, let’s take what look like the most pessimistic, reasonable assumptions about the next century. And probably the most pessimistic thing you could reasonably believe is that that represents the ineliminable long-term rate at which existential catastrophes come along. And that’s, at the absolute outside, maybe 1% a year. It’s probably less than that. So, what I conclude in the paper is, if you make sufficiently pessimistic assumptions about, for instance, the long-term rate at which existential catastrophes come along, ineliminably, you can come up with empirical assumptions that, if you really commit to them, will really cut the expected value of say existential risk reduction down to something trivial.
Christian Tarsney: So, for instance, if you think there’s an ineliminable 1% risk of existential catastrophe every year for the rest of time, that cuts the expected value of the far future down enormously. But once you start accounting for uncertainties about parameters in the model… For instance, okay, I have at least some credence that far-future civilization will be more stable or more secure than that, that the annual rate of existential catastrophe will be only say one in 10,000 or one in 100,000, one in 1 million or something like that. And maybe I’m skeptical that humanity will ever settle the stars, but I think there’s at least a one in 1,000 or one in 1 million chance that we will. And maybe I think there’s a one in 1,000 chance of these really utopian scenarios where we manage to just produce astronomically more happiness per unit resource or per star system than our current technology allows us to, and you only need a little bit of credence in those more optimistic assumptions to get the case for existential risk reduction back on track, at least within the framework of expected value maximization.
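To make the structure of that calculation easier to see, here is a minimal numerical sketch in Python. Every specific figure in it (the work hours, the wage, the credences, the hazard rates, the per-year values) is a hypothetical placeholder rather than a number from Tarsney’s paper; the point is only how the pieces fit together.

```python
# A toy sketch of the model described above -- all numbers are hypothetical.

# (1) Fermi-style lower bound on how much $1M could shift the probability of
#     survival, assuming (hypothetically) that all of humanity working on
#     nothing but survival for 1,000 years would shift it by at least 1%,
#     and that returns are constant rather than diminishing.
total_work_hours = 8e9 * 2000 * 1000       # ~8B people x 2,000 hrs/yr x 1,000 yrs
hours_bought_per_million = 1e6 / 25        # at a hypothetical $25/hour
delta_p = 0.01 * hours_bought_per_million / total_work_hours

# (2) Expected value of survival under a constant "ineliminable" annual hazard
#     rate r: with value v per year conditional on still being around, the
#     expected cumulative value is v * sum over t of (1-r)^t, roughly v / r.
def expected_future_value(v_per_year, hazard_rate):
    return v_per_year / hazard_rate

# (3) Mix over worldviews: mostly-pessimistic credence, plus small credences in
#     a more stable future and in a stable, astronomically more valuable one.
worldviews = [
    # (credence, value per year, annual hazard rate) -- all hypothetical
    (0.980, 1.0, 1e-2),   # pessimistic: 1%/yr ineliminable risk, modest value
    (0.019, 1.0, 1e-5),   # far more stable future
    (0.001, 1e6, 1e-5),   # stable *and* astronomically more valuable
]
ev_survival = sum(c * expected_future_value(v, r) for c, v, r in worldviews)

print(f"delta_p per $1M:     {delta_p:.2e}")
print(f"EV of survival:      {ev_survival:.2e}")
print(f"EV of $1M on x-risk: {delta_p * ev_survival:.2e}")
```

With these placeholder numbers, the third worldview contributes almost all of the expected value of survival despite receiving only 0.1% credence, which is exactly the pattern behind the fanaticism worry raised next.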
Christian Tarsney: But then you end up back at this point, which motivates my interest in fanaticism, where, honestly, I think it’s just quite hard to figure out, at least without a ton of subjective guesswork, how much does the expectational case for say working on existential risk reduction depend on these very extreme tail probabilities. But it’s at least prima facie plausible that it does. If you’re very skeptical about the scenario where far-future human civilization is extremely stable and expanding into the stars and producing enormous amounts of happiness — there are reasonable people who think that that’s just an outlandish possibility and assign it a small trivial probability — yeah, it could turn out that the expectational case for existential risk reduction is really driven by that trivial probability you assigned to what you see as an outlandish scenario.
Christian Tarsney: And so I think where this exercise leaves me is thinking, insofar as we’re happy to be expected value maximizers for whatever reason, the case for prioritizing at least existential risk reduction, maybe the long-term future more generally, looks pretty robust. But insofar as we have these residual worries about fanaticism, there is a kind of question mark, where those worries about fanaticism combine with our epistemic worries about the far future to produce some residual discomfort, potentially, with longtermism.
Robert Wiblin: So to kind of repeat that back, it sounds like if you’re someone who just really feels very unsure about what are the odds that a group of people really trying to reduce the risk of extinction are going to succeed, and if you’re really unsure about how large could human civilization become in future, or Earth-originating life, like maybe it can go to space, maybe it can’t, no idea, and if you’re also unsure about like, whether we’ll ever be able to achieve some kind of stable state where extinction is now very unlikely, then all of that uncertainty kind of means that there is some reasonable possibility that we will get to this stable, very positive and very big state. And so that uncertainty means that there’s a strong case for working to try to achieve that outcome by reducing the possibility of some catastrophe that would take us off track now.
Robert Wiblin: And to get around that, you kind of have to say, no, I’m really sure that we can’t reduce extinction risk now, and I’m really sure that we’re never going to achieve a stable state. And I know we’re never going to get off Earth, or we’re never going to leave the solar system. Those are things that, I guess, some people claim, but I don’t really know what the basis would be for being so confident about any of those claims. To be honest, none of those three seems plausible to me really. But someone who was committed to those empirical views would have a strong case against working on longtermism.
Christian Tarsney: Yeah, I think that’s right. I mean, I share your intuition, not from the perspective of any of this empirical stuff being my real expertise, but it does seem just very strange to me to not assign some substantial probability to humanity eventually settling the universe and living in ways that are radically different and maybe radically better than the way we live today. But part of the purpose of this exercise is to say, well, there are these smart, apparently reasonable people who really do find these scenarios outlandish, and they assign at least most of their credence to the more mundane Earth-bound scenarios for what the far future will look like. And should those people be longtermists, particularly when you throw these epistemic worries into the mix?
Robert Wiblin: Yeah, I guess it seems like you could combine this paper with the fanaticism one to get some kind of middle-ground thing where maybe you have to discount or chop off the most extreme biggest outcomes, because they would be large relative to the background uncertainty. But then maybe if you kind of bake this cake all together, you end up with some moderately strong case or moderately robust case in favor of working on really longtermist projects.
Christian Tarsney: Yeah. This is all very back of the envelope, and subjective, but it seems to me, and I make some argument for this in the paper on stochastic dominance, that our background uncertainty is at least great enough that we should be fanatical or expected value maximizers roughly out to probabilities of like one in 1 billion, or something like that. And then in the context of these epistemic worries, if you have at least a one in 1 billion credence in these scenarios that permit extreme persistence where far-future civilization will be extremely stable, for instance, then that’s enough to not just make the expectational case for longtermism, but make the more robust case on the basis of mere stochastic dominance.
Christian Tarsney: So, yeah, I guess I would say if you have less than a one in 1 billion credence in the more optimistic high-persistence scenarios, or you have less background uncertainty than that argument presupposes, I would view that as unreasonable overconfidence. I think many people would. But, again, a lot of this comes down to subjective judgements about what reasonable probability assignments are.
Best arguments against working on existential risk reduction [01:32:34]
Robert Wiblin: Yeah. For people who are ethically inclined towards longtermism as a kind of practical, moral principle, what are the best arguments against working on things that look like existential risk-related or longtermist-related projects in practice?
Christian Tarsney: Well, I usually say for my own part, I’m fairly sold on the idea that existential risk should be high on the list of priorities for longtermists. And one reason for that is I think that that’s where we have the clearest argument for potentially extreme persistence. So when we’re thinking about other things like changes to institutions or norms or values, maybe those changes will persist for a very long time, but it seems much more plausible to think that they’ll eventually wash out, or they would have happened anyway, or something like that. But if you wanted to make the case for putting existential risk kind of lower on the list of longtermist priorities, the most straightforward argument is just to contest the assumption that the survival of humanity is very, very good in expectation.
Christian Tarsney: So, of course, you might think, well, to take the extreme case, if you’re something like a negative utilitarian, you think, well, we only care about minimizing suffering. And if humanity survives for a very long time, maybe all we’ll do is just spread suffering to the stars and that’ll be terrible. So, of course, that’s a reason for, well, not just not trying to minimize extinction risks, but maybe hoping that humanity goes extinct, or something like that.
Robert Wiblin: Or I suppose, perhaps trying to focus on preventing those worst-case scenarios, which might involve being kind of neutral on extinction perhaps, but focusing on how things could become negative in value.
Christian Tarsney: Yeah, that’s true, too. But even if you don’t think of yourself as a negative or a negative-leaning consequentialist, something that I think a number of longtermists believe is roughly that the modal case where humanity survives is one where maybe things are better than break even in expectation, but we still achieve only a tiny fraction of our potential value. So maybe we have dysfunctional social institutions, or people never acquire the right values, or the true moral beliefs, or something like that.
Robert Wiblin: Or we’re just not ambitious enough.
Christian Tarsney: Yeah, yeah. Imagine, for instance, that the far future… By default, the modal scenario is kind of like human civilization today, where at least if you set aside worries about factory farms and wild animals, and just think about human beings, plausibly we’re a little bit above break even, probably most people’s lives are worth living, but we’re not all ecstatic all the time or something. And maybe the modal scenario is that this continues, just with fancier technology. Then you might also think, well, there is this other possibility out there where we just achieve astronomical levels of happiness and value and in one way or another optimize the universe for value. And making even a very small change to the probability of a future optimized for value, versus the modal mediocre future, has greater expected value than reducing the probability of existential risk, which for the most part just increases the probability of that mediocre future.
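To see the shape of that comparison, here is a tiny back-of-the-envelope sketch. All of the numbers are invented for illustration; none of them come from the conversation or from any paper.

```python
# Hypothetical values of three broad futures, on an arbitrary scale.
V_extinct  = 0.0
V_mediocre = 1.0      # "a little bit above break even", scaled to 1
V_optimal  = 1000.0   # a future actually optimized for value (hypothetical ratio)

delta_p_trajectory = 0.0001  # hypothetical shift from the mediocre to the optimized future
delta_p_survival   = 0.001   # hypothetical shift from extinction to the mediocre future

ev_trajectory = delta_p_trajectory * (V_optimal - V_mediocre)   # ~0.1
ev_xrisk      = delta_p_survival   * (V_mediocre - V_extinct)   # 0.001

print(ev_trajectory, ev_xrisk)  # the much smaller probability shift wins here
```

If the optimized future really is vastly better than the mediocre one, even a much smaller probability shift toward it can carry more expected value than a larger reduction in extinction risk; the comparison of course hinges entirely on those invented ratios.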
Robert Wiblin: Yeah. That makes sense. I kind of think I believe that. And it’s maybe something we should talk about on the show a little bit more, that there might be more value in getting people to raise their vision for how amazing the future can be, and not aspiring merely to survive and persist in what I would say is the kind of mediocre situation that we’re in now, where it’s not even really clear whether there are more good things than bad things in the world. Plausibly there are, and plausibly there aren’t. But really what we should be aiming for is something where it’s just so astronomically clear that the universe is an amazing place, and the vast majority of stuff that’s going on is fantastic. I think, maybe among people I know, many people have that vision for something that’s extraordinarily, astronomically good. But I think that’s not a mainstream cultural idea, and I would feel a lot better about the future if it were.
Christian Tarsney: Yeah, I think that’s right. And I do find it certainly plausible that longtermists should diversify their portfolio to some extent between increasing the probability of the survival of humanity versus increasing the expected value of human civilization, conditional on survival. But I guess one thing I’m inclined to think (and I don’t think I have any amazing arguments to back this up) is that insofar as putting humanity on the track towards a utopian future, a future optimized for value, is tractable, insofar as that’s something we can do much about, it’s also something that’s probably likely to happen anyway. So if you’re a moral realist and you think that there are real moral truths out there, and those truths are discoverable, and that agents, when they apprehend the moral truth, are at least sort of asymmetrically motivated to do good things rather than bad things, then plausibly in the long run, good moral values will be discovered and their influence will propagate, and we’ll make our way towards utopia.
Christian Tarsney: And if you don’t think that, if you think our motivational systems are fine-tuned by evolution to do something other than pursue the good, e.g. to maximize reproductive success or something like that, then even if there is a moral truth out there, in the long run attempts to persuade people of the moral truth are not going to have an enormous global influence on people’s behavior, because we’re all just going to, in the long run, be reproductive fitness maximizers or something like that… I guess the thought is that the path to utopia is either kind of inevitable or nearly impossible, or something like that.
Robert Wiblin: Oh, interesting. Okay. So this is kind of an argument that it’s not very tractable because there’s going to be strong underlying tendencies for people to adopt or not to adopt particular values and goals. And just trying to make moral arguments… Either they do work, in which case they’ll work at some point anyway, or they’re going to fall on deaf ears and it’s not going to work regardless of whether you personally try or not.
Christian Tarsney: Yeah. And I shouldn’t overstate the extent to which I believe this. I mean, I think there’s certainly… It’s not unreasonable to have some credence in a middle ground where actually we can make a difference. Particularly if you think values or motivations or something are going to get locked in at some point, maybe when we achieve superintelligence or something like that.
Robert Wiblin: Yeah. I mean, I’m not sure that I find this that intuitively probable. Just like looking around culturally at different civilizations, different cultural groupings, both different ones that are around today and different ones that have been around throughout history… It seems like, yes, there are particular things that they have in common, and that are quite unusual not to have, but then with people’s discretionary budgets, the ways that they express themselves and express their values, you see quite a bit of variation in what people choose to do with that slack. And it’s influenced by philosophical arguments as well as religion and culture and tradition and all of that. And so it seems like maybe that stuff is difficult to shift around because that kind of culture is fairly persistent, but inasmuch as you think that you have a good argument and have persuaded some people, maybe other people will be persuaded if they hear it as well.
Christian Tarsney: Yeah. I mean, I find that perfectly plausible or at least worth entertaining, but I think there are two stories you can tell about the historical record that correspond to the two prongs of this dilemma I was describing. So one prong is the kind of moral progress story, where slowly, and unevenly, kind of in fits and starts, we’ve been inching our way towards the moral truth. And there’s plenty of diversity, number one because there’s a million ways to be wrong and only one way to be right. So insofar as we haven’t reached the moral truth yet, we have different moral errors that we’ve fallen into. And number two, insofar as different cultures are making progress along that path towards the moral truth at different rates, then if some cultures at some point in the 19th century accept slavery and others don’t, well, that’s moral diversity. But it doesn’t defeat the idea that we’re all ultimately progressing towards the moral truth that slavery’s bad, or something like that.
Christian Tarsney: And then the other story you can tell is the kind of hard-nosed Darwinian story where what’s happened in the last say 2,000 or 3,000 years is just that we’ve been out of evolutionary equilibrium in this weird way, where we have motivational systems that were fine-tuned for our ancestral environment. And suddenly we’re thrust into this new environment where our motivations could maybe go off in weird directions and there aren’t strong selection pressures because, well, we have things like an agricultural surplus, that means that even people making weird non-fitness-maximizing choices, their lineages can survive for a while. But in the very long term, maybe we’ll end up back in some Malthusian trap, or maybe we’ll end up with some evolutionary competition between artificial superintelligences or something. And then evolution will take back over, and you’ll just get motivational systems that are optimized for something like reproductive fitness.
Robert Wiblin: Alright. Let’s push on to briefly discuss another longtermist-related issue, this essay that you and Hilary Greaves are working on at the moment about what you call the scope of longtermism, which is roughly how much of all of humanity’s resources could we or should we spend on improving the long-term future before the marginal returns on spending more diminish so much that spending any more on longtermism beyond that would be a mistake, or at least no better than what else we could do. I know you’re still in the process of thinking about this and talking about this and writing it up, but how are you analyzing the questions? And do you have any preliminary ideas?
Christian Tarsney: Yeah, so I would say we certainly haven’t reached any conclusions, and the purpose of the essay is more to raise the question and get other people thinking about it and do a survey of some possibilities. There are two motivations for thinking about this. One is a worry that I think a lot of people have — certainly a lot of philosophers — about longtermism, which is that it has this flavor of demanding extreme sacrifices from us. That maybe, for instance, if we really assign the same moral significance to the welfare of people in the very distant future, what that will require us to do is just work our fingers to the bone and give up all of our pleasures and leisure pursuits in order to maximize the probability at the eighth decimal place or something like that of humanity having a very good future.
Christian Tarsney: And this is actually a classic argument in economics too, that the reason that you need a discount rate, and more particularly the reason why you need a rate of pure time preference, why you need to care about the further future less just because it’s the further future, is that otherwise you end up with these unreasonable conclusions about what the savings rate should be.
Robert Wiblin: Effectively we should invest everything in the future and kind of consume nothing now. It’d be like taking all of our GDP and just converting it into more factories to make factories kind of thing, rather than doing anything that we value today.
Christian Tarsney: Yeah, exactly. Both in philosophy and in economics, people have thought, surely you can’t demand that much of the present generation. And so one thing we wanted to think about is, how much does longtermism or how much does a sort of temporal neutrality, no rate of pure time preference, actually demand of the present generation in practice? But the other question we wanted to think about is, insofar as the thing that we’re trying to do in global priorities research, in thinking about cause prioritization, is find the most important things and draw a circle around them and say, “This is what humanity should be focusing on,” is longtermism the right circle to draw?
Christian Tarsney: Or is it maybe the case that there’s a couple of things that we can productively do to improve the far future, for instance reduce existential risks, and maybe try to improve institutional decision making in certain ways, but that for other ways of improving the far future, well, either there’s just not that much we can do, or all we can do is try to make the present better in intuitive ways. Produce more fair, just, equal societies and hope that they make better decisions in future.
Robert Wiblin: Improve education.
Christian Tarsney: Yeah, exactly. Where the more useful thing to say is not “we should be optimizing the far future”, but the more specific thing: we should be trying to minimize existential risks and improve the quality of decision making in national and global political institutions, or something like that.
Robert Wiblin: There’s two things there. One is a demandingness issue: should it be that we spend almost all of our time trying to improve the very long-term future and hang the present? And maybe also it sounded like you were alluding to the idea that, after we’ve spent some amount of resources on things that are specifically for the long-term future, there might end up being quite a degree of alignment between stuff that makes the long term go well and things that make the present go well. Because the way to figure out how to improve decisions that we’ll make in the future with the institutions we have now is probably just to improve those institutions and make people more reasonable and informed and better able to make decisions now, and then hope that that will carry forward. And so doing things that would make things look better in 100 years or 1,000 years might just end up looking awfully similar to just trying to improve how things are being run today.
Christian Tarsney: Yeah, exactly. You could think of there being a spectrum between radical longtermism and subtle longtermism, where radical longtermism, for instance if you really took seriously the idea that it’s all about maximizing the growth rate, maybe so that we can start launching our space probes as soon as possible and minimize astronomical waste, get to all the stars before they vanish beyond the astronomical horizon… And so then the thing that we should be doing right now, the kind of longtermist thing to do, as you described, is making factories to make factories. Every waking moment should be about launching those first space probes as quickly as we can. That’s the radical longtermism.
Christian Tarsney: And then the more subtle longtermism says things like, “Well, the far future is very important. We don’t know exactly what challenges we’re going to face, what choices we’re going to have to make, so the best thing we can do right now is try to equip people 50 or 100 years from now to face those challenges and make those choices better.” And for instance, one way we can do that is by trying to improve things like social capital and social trust. Societies where there’s a higher level of trust have more effective institutions. People are more willing to make sacrifices for the common good. For instance, they’re more willing to wear masks during a pandemic, say, and that just has all sorts of unexpected payoffs in all sorts of situations. What we should really be doing is, for instance, trying to make existing societies more fair and just and equal in order to improve social trust and social capital, and that’s going to have all these downstream payoffs in terms of how we face these unexpected challenges.
Christian Tarsney: And there, that’s not to say that longtermism is false or that longtermism makes no difference in practice, because it might be that, if we were just thinking about the next 100 years, we should be focused on factory farmed animals or something like that. Maybe thinking about the long-term future shifts our focus from one of the things we might intuitively be doing to another, but the result is not this crazy, demanding…
Robert Wiblin: In a way that’s very recognizable.
Christian Tarsney: Yeah, exactly.
Robert Wiblin: Yeah. Yeah. Interesting. Do you have any kind of preliminary conclusions? Would you like to guess where you might come down on these?
Christian Tarsney: Yeah, the second thing. I think nothing that I’d want to call a preliminary conclusion yet. I guess my own intuition is in that subtle longtermist direction. That we should think of the future as very important, but also very unpredictable. And that means that most of what longtermists should be doing, apart from a few obvious cases like reducing existential risks, is trying to equip the next few generations to make choices better. And that involves mostly… I think this is an interesting question. If we want people 100 years from now to respond to challenges better, are the things that we’re going to do to achieve that end mostly things that will make people between now and then better off?
Christian Tarsney: Or maybe we should subject the next couple of generations to lots of adversity so that they’re forced to build character and learn how to confront… Maybe we should put them all in Hunger Games scenarios so that they can survive if humanity is on the brink or something. But intuitively I think more prosperous, more fair, more just, more equal societies are just likely to handle challenges better. And so there’s a kind of natural connection between trying to improve our ability to respond to challenges and just trying to improve the lives of people alive today and in the near future.
Robert Wiblin: I guess the kind of natural middle ground view is maybe humanity could spend $1 trillion each year on stuff that is quite targeted at the long term, things related to nuclear weapons and dangerous new technologies and building friendships between countries so they don’t go to war, and things like that. But then, after we’ve soaked up all that learning and returns, what’s left is stuff that is extremely recognizable, like trying to get governments to work better, and making better decisions, and generally making sure that people will know what’s going on in the world, and all this other stuff that we were kind of doing anyway, maybe not quite as much as we should, but it doesn’t really look at all peculiar.
Christian Tarsney: Yeah. I think that’s right. Although another thought that’s worth adding to the mix, and this is an observation that I’m stealing at least proximately from Carl Shulman, is that even that $1 trillion, the stuff that you’re spending on nuclear weapons and biological risks and so forth, it may not be that difficult to justify from a short-termist perspective. If you’re just thinking about the next 100 years, there’s already a pretty compelling case for worrying about nuclear weapons and biological risks and even artificial intelligence. Even there, maybe it is just longtermism reshuffling the list of the top 10 priorities.
Robert Wiblin: Converges with common sense. Yeah, or taking something that was the tenth priority and making it the fifth or taking something that’s the third and moving it to the first. This was all stuff that we really should have been focusing on if we were smart anyway. I probably should stop describing all of this stuff as peculiar because I’m not really sure that any significant fraction of people think that trying to prevent a pandemic or trying to prevent nuclear war actually is peculiar. Actually it is very common sense.
Christian Tarsney: Yeah, it seems in practice like one of those things where the big challenge is just to get people marching in the direction that everybody agrees is the direction to march.
Robert Wiblin: Another quick thing that I know you’ve been thinking and talking about at GPI is how large the expected value of the continued existence of human-originating or Earth-originating civilization might be. As I understand it, you’ve been looking at historical trends in how well the world is going and how well we’re cooperating and things like that. And maybe also thinking, is there a tendency towards things being good rather than bad because intelligent agents that are capable of dominating a planet are maybe more likely to go out and pursue the goals they have and try to make things better, rather than just go out and engage in wanton destruction? We probably wouldn’t last very long if things were like that. Obviously a very speculative area, but what considerations have featured prominently in those discussions?
Christian Tarsney: Yeah, I think those two threads that you’ve identified have probably been among the things that we’ve been most interested in. There is this kind of outside view perspective that says if we want to form rational expectations about the value of the future, we should just think about the value of the present and look for trend lines over time. And then you might look at, for instance, the Steven Pinker stuff about declines in violence or look at trends in global happiness. But you might also think about things like factory farming, and reach the conclusion that actually, even though human beings have been getting both more numerous and better off over time, the net effect of human civilization has been getting worse and worse and worse, as we farm more and more chickens, or something like that.
Christian Tarsney: I’ll say, for my part, I’m a little bit skeptical about how much we can learn from this, because outside-view, extrapolative reasoning makes sense when you expect to remain in roughly the same regime for the time frame that you’re interested in. But I think there are all sorts of reasons why we shouldn’t expect that. For instance, there’s the problem of converting wealth into happiness, which we just haven’t really mastered, because, well, we don’t have good enough drugs or something like that. We know how to convert humanity’s wealth and resources into cars. But we don’t know how to make people happy that they own a car, or as happy as they should be, or something like that.
Christian Tarsney: But that’s in principle a solvable problem. Maybe it’s just getting the right drugs or the right kinds of psychotherapy or something like that. And in the long term, it seems very probable to me that we’ll eventually solve that problem. And then there’s other kinds of cases where the outside view reasoning just looks clearly like it’s pointing you in the wrong direction. For instance, maybe the net value of human civilization has been trending really positively. Maybe humanity has been a big win for the world just because we’re destroying so much habitat that we’re crowding out wild animals who would otherwise be living lives of horrible suffering. But obviously that trendline is bounded. We can’t create negative amounts of wilderness. And so if that’s the thing that’s driving the trendline, you don’t want to extrapolate that out to the year 1 billion or something and say, “Well, things will be awesome in 1 billion years.”
Robert Wiblin: Yeah. I see. Interesting. I think it is quite possible perhaps that humanity has overall been negative because of all of the suffering that we’ve created in factory farming. And I guess other very negative places too, perhaps prisons, there’s an enormous amount of suffering there. And in these very specific locations that’s enough to outweigh the broader, mild good that the rest of us get. But then you would have to extrapolate that… If you do this extrapolation, you’re going to end up assuming that in 1,000 years’ time, we’re just going to have 1,000 times as many animals or something like that in factory farms, which just seems extremely improbable given that it’s such an already borderline-outmoded technology, so why on earth would you project it forward that way?
Christian Tarsney: Right. The overall trendline is being driven by the one phenomenon, where that one phenomenon could just easily go away in 100 years, maybe for just boring technological and economic reasons. Again, it seems like extrapolating too far out in the future to me at least looks like a mistake.
Robert Wiblin: Yeah. And what do you think of the idea that we should think that agents that are smart enough to exist in huge numbers probably also are smart enough to satisfy their preferences and maybe do things that are moral rather than just things that are randomly good and bad?
Christian Tarsney: Yeah. I think there’s two possible arguments there. One is the idea that agents generally tend to pursue their own good, and the universal good is just something like the sum of individual goods. And so if my actions tend to promote my good and just be neutral for everybody else’s good, and similarly for everybody else, then all else being equal, you would expect the future to be good rather than bad, because each agent individually tends to make their life good rather than bad. And maybe if we’re sufficiently good at communicating and coordinating and maybe as we get more intelligent, we’ll be able to bargain and trade more and more efficiently, and then maybe a civilization full of self-interested agents could—
Robert Wiblin: Do all right.
Christian Tarsney: Yeah. That relies on some assumptions, for instance, that you remain in a situation where there’s something like a kind of parity of power between most of the individuals you’re thinking about. Or maybe we shouldn’t say remain in that situation, because we were just talking about factory farming. That’s an example of… Maybe in the human economy you have a bunch of agents who are each mostly self-interested, but they’re constrained by other people’s ability to do them harm or something, or constrained by a legal framework that they’ve all agreed to. And that means that they are collectively able to reach efficient outcomes, but then there’s this other set of beings who are just totally powerless.
Robert Wiblin: They get screwed.
Christian Tarsney: Yeah, exactly. And so maybe the future will be like that. And so the fact that we’re all pursuing our own good is no guarantee that things will turn out well from the point of view of the universe. But then there’s this other thought that maybe we have some general motivation to pursue not just our own good, but The Good. One way of thinking about this is that maybe there’s something natural about empathy; at least, empathy is in some sense more natural than sadism. And if you think that that tendency to care about the interests and welfare of other beings becomes stronger over time, that our moral circle expands, and that as we get richer and better able to satisfy our own needs we become more able to turn our attention to other people’s needs, then that would be a reason to think, maybe more generally or more robustly, that the future will be good.
Christian Tarsney: But I think there’s something paradoxical about this, because on the one hand, it seems very strange to think that there is such a thing as The Good, there are real values out there, and they’re knowable, but there is no asymmetric tendency to pursue them. It’s just as possible to end up in a civilization where most people are actively motivated to pursue the bad instead of the good. It just seems intuitively obvious that, if there is such a thing as The Good, we have some asymmetric tendency to pursue the good rather than the bad. And on the other hand, if you put on your hard-nosed scientist hat, it just feels crazy that there would be this… Well, you can imagine motivational systems that are optimized for anything. You can imagine an agent with any utility function you want, a reinforcement learning agent optimizing whatever conceivable reward function, so why should there be written into the universe this law that more agents tend to be motivated this way than the other way?
Robert Wiblin: Yeah. I guess we need to bring in the evolutionary psychologists.
Robert Wiblin: This actually very nicely leads into the next section, which is going to be about moral uncertainty, which has been one of your main research interests over the years. We’ve talked about it a couple of times on the show, but can you just quickly recap the problem of moral uncertainty?
Christian Tarsney: Yeah, so a lot of effort in philosophy and economics and elsewhere has gone into thinking about how we should respond to uncertainty about the state of the world, about empirical questions. That’s most standard decision theory. But until recently, much less effort has gone into thinking about how we should respond to uncertainty about basic normative questions, about what things are good or bad. When people talk about moral uncertainty in this context, in this literature, what they mean is uncertainty about those fundamental value questions. Is the good that we should be pursuing happiness? Or preference satisfaction? Or human perfection, or something like that?
Robert Wiblin: Justice.
Christian Tarsney: Right.
Robert Wiblin: Equity.
Christian Tarsney: Yeah, or should we be maximizing the total amount of value in the world, or the average, or whatever? In roughly the last 20 years, philosophers have really started to take this seriously and try to extend standard theories of decision making under empirical uncertainty to fundamental moral uncertainty. And then also started having this debate about whether you should actually do that, and whether moral uncertainty is the thing that we should care about in the first place.
Robert Wiblin: Yeah. Two angles that people come at this puzzle with are called externalism and internalism. Can you explain what those two views are and how they relate to moral uncertainty?
Christian Tarsney: Yeah, so unfortunately, internalism and externalism mean about 75 different things in philosophy. This particular internalism and externalism distinction was coined by a philosopher named Brian Weatherson. The way that he conceives the distinction, or maybe my paraphrase of the way he conceives the distinction, is basically an internalist is someone who says that normative principles, ethical principles, for instance, only have normative authority over you to the extent that you believe them. Maybe there’s an ethical truth out there, but if you justifiably believe some other ethical theory, some false ethical theory, well, of course the thing for you to do is go with your normative beliefs. Do the thing that you believe to be right.
Christian Tarsney: Whereas externalists think at least some normative principles, maybe all normative principles, have their authority unconditionally. It doesn’t depend on your beliefs. For instance, take the trolley problem. Should I kill one innocent person to save five innocent people? The internalist says, suppose the right answer is you should kill the one to save the five, but you’ve just read a lot of Kant and Foot and Thomson and so forth and you’ve become very convinced, maybe in this particular variant of the trolley problem at least, that the right thing to do is to not kill the one, and to let the five die. Well, clearly there is some sense in which you should do the thing that you believe to be right. Because what other guide could you have, other than your own beliefs? Versus the externalist says well, if the right thing to do is kill the one and save the five, then that’s the right thing to do, what else is there to say about it?
Robert Wiblin: Yeah. Can you tie back what those different views might imply about how you would resolve the issue of moral uncertainty?
Christian Tarsney: The externalist, at least the most extreme externalist, basically says there is no issue of moral uncertainty. What you ought to do is the thing that the true moral theory tells you to do. And it doesn’t matter if you don’t believe the true moral theory, or you’re uncertain about it. And the internalist of course is the one who says well no, if you’re uncertain, you have to account for that uncertainty somehow. And the most extreme internalist is someone who says that whenever you’re uncertain between two normative principles, you need to go looking for some higher-order normative principle that tells you how to handle that uncertainty.
Robert Wiblin: What are the problems with those perspectives? And I guess, which one do you ultimately find more compelling?
Christian Tarsney: The objections to externalism usually start from just appealing to case intuitions. Suppose that actually it’s permissible to eat meat, but I have an 80% credence, on the basis of really good arguments, that it’s morally wrong. Clearly there’s something defective about me if I go ahead and do this thing that I believe to be probably seriously wrong, or something like that. You can also describe these cases, what are called Jackson cases in the literature, where I know for sure that either A or B is objectively morally the best thing to do, but both of them carry a lot of risk, and there’s this other option C that’s nearly as good as A according to the theory that says do A, and nearly as good as B according to the theory that says do B. And so it just seems really intuitive, and it’s what expected value reasoning would tell you, that you should hedge your bets and choose C: minimize your expected shortfall, or something like that. That’s one argument.
Christian Tarsney: Another argument is just to say well, most people who describe themselves as an externalist in this literature still think that people’s empirical beliefs make a difference to what they ought to do. If I think that this coffee cup might be poisoned, even if it in fact isn’t, I nevertheless shouldn’t drink from it, and I shouldn’t offer it to you. Clearly my empirical beliefs and uncertainties make a difference to what I ought to do. And then the burden of proof is on the externalist to explain what the difference is between our empirical beliefs and our moral beliefs.
Robert Wiblin: I guess someone could try to reject that and say if the coffee isn’t poisoned, then you should drink it even if you think that it is poisoned. But then there’s something that’s kind of obtuse about that answer. It’s like, okay all right. I agree in some sense that’s true, but you’re not really grappling with the situation in which we really find ourselves in the real world. So what is the point of this deliberately point-missing statement?
Christian Tarsney: Yeah, the way that I think about this is to take any of these cases. So, for instance, the classic empirical Jackson case, where there’s a doctor trying to decide how to treat a patient and she doesn’t know which condition the patient has. And there’s one drug that would perfectly cure condition one, another that would perfectly cure condition two, but they’re both fatal if the patient has the other condition. And then there’s a third drug that would nearly perfectly cure both conditions. And to me, the argument is something like if you were in fact in that doctor’s position, question one, what would you do?
Christian Tarsney: And obviously any reasonable person would prescribe the third drug. And then the second question is, do you think that that’s just an arbitrary, arational, maybe spasmodic thing that you’re inclined to do, or do you think that you’re somehow being guided by reasoning or guided by norms when you make that decision? And it just seems totally incredible to me to not concede that that decision is guided by norms or guided by reasoning. And I think you can say exactly the same thing in the normative case.
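For concreteness, here is the doctor’s choice written out as a small expected value calculation, a sketch with invented payoff numbers and an assumed 50/50 credence rather than figures from the literature.

```python
# The doctor's Jackson case as a tiny expected value calculation.
p_condition_1 = 0.5  # the doctor's credence that the patient has condition 1

payoffs = {
    # (value if condition 1, value if condition 2) -- illustrative numbers only
    "drug A": (100, -1000),   # perfect cure for condition 1, fatal if condition 2
    "drug B": (-1000, 100),   # fatal if condition 1, perfect cure for condition 2
    "drug C": (95, 95),       # nearly perfect cure either way
}

for drug, (v1, v2) in payoffs.items():
    ev = p_condition_1 * v1 + (1 - p_condition_1) * v2
    print(drug, ev)   # drug C: 95.0; drugs A and B: -450.0
```

Drug C wins by a wide margin, which is the sense in which the hedging choice looks norm-guided rather than arbitrary or spasmodic.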
Robert Wiblin: That’s a problem with externalism. What are the weaknesses of the internalist view?
Christian Tarsney: Yeah. Probably the biggest weaknesses in my mind are, well, two things. One is that it’s vulnerable to the regress problem, where the most extreme internalist says if you are uncertain between two normative principles N1 and N2, you need a higher-order normative principle. But then intuitively it’s not like once we get from first-order ethics to second-order ethics, now the clouds open and everything’s clear and we know what those principles are. There’s uncertainty and debate there too, and so we need third-order principles, and so on and so on and so on. And there are some felicitous conditions: Phil Trammell has a nice paper that describes somewhat general conditions under which you get this kind of nice convergence as you go up to higher and higher order norms. But in the general case, where you have credence in the full range of higher-order theories that might seem reasonable, it seems like you just end up stuck and you’re never able to reach a kind of norm-guided decision. And so that looks bad.
Robert Wiblin: Okay. The issue is you need some principle to evaluate and aggregate your different underlying moral philosophies that could plausibly be true. So you need this principle for handling moral uncertainty that tells you how to evaluate those. But then you’re going to be uncertain about the different principles by which you would aggregate: you’ve got uncertainty about how to handle moral uncertainty, and so you need some higher-level principle to figure out how to aggregate at that level, and on and on and on. It would be nice if, with each level you went up, you kind of converged on some common view, where regardless of the specifics of exactly what you believed you ended up at the same place. And I guess an alternative would be that you just got to some bedrock level where there was no moral uncertainty, and at that stage you could just say, “Well, now you should just do the thing that the correct theory says,” and you would stop the regress that way.
Christian Tarsney: Yeah, exactly. Nobody, as far as I know, has thought about eleventh-order meta-norms, but maybe once we get up to the eleventh-order norms, it’ll just be obvious what the true eleventh-order norm is, and we can stop. But that doesn’t seem particularly plausible. That’s kind of a negative argument against internalism, that it leads you into this regress. And then I think there’s a positive argument for some kind of externalism, which says roughly, “Okay, what is the question we’re asking here?” Maybe we’re asking what’s the rational thing to do under uncertainty in general. And what we want is some theory, some criterion of rationality that says, “A choice is rational if and only if phi.” Where phi is some formula that might make reference to the agent’s beliefs, might make reference to all sorts of things, but that’s the criterion for whether you’re being rational.
Christian Tarsney: And then whatever that theory is, whatever the content of phi is, you can imagine an agent who doesn’t believe that theory, doesn’t believe that that’s the criterion of rationality, but nevertheless, insofar as that is the theory of rationality, well, that’s the theory of rationality. You’re rational if and only if phi, whether or not you believe that you’re rational if and only if phi. That’s just to say if there’s any kind of true theory at all about this normative concept we’re investigating, at some point it has to be a theory that you could disbelieve, but nevertheless it still applies to you.
Robert Wiblin: Yeah. What do you make of this regress problem? Is that a serious problem? Or might there be a resolution that will allow us to be internalists to some degree?
Christian Tarsney: Yeah, I have a paper under review on this exact question. The place that I come down is what I describe as a kind of moderate form of externalism. Roughly I think that the question we’re interested in is how to respond rationally to uncertainty. And there is going to be some basic principle of rational decision making under uncertainty, whether that’s maximizing expected value or expected choice-worthiness, or whether it’s this stochastic dominance principle, and ultimately that is just the criterion of rationality. And if you don’t believe it, nevertheless it’s the criterion of rationality and it applies to you. But that still allows that what you ought to do depends on your beliefs about your reasons, including your beliefs about the value of the possible outcomes of your options, and those beliefs, or your uncertainty, depend not just on your empirical uncertainties, but also on your first-order moral uncertainties.
Christian Tarsney: Basically where I end up is saying, rather than what the extreme externalists want to do and just say, “Empirical uncertainty, yes, normative uncertainty, no,” I think what we should say is when we’re asking a question about rationality, it’s empirical uncertainty, yes that matters. Moral uncertainty, yes that matters. But uncertainty about the principles of rationality, no, that doesn’t matter because whatever the principles of rationality are, they’re the principles of rationality and they determine whether an action is rational or not.
Robert Wiblin: Okay. So you’re going to have a mixed view where you’re going to be an internalist about empirical uncertainty, an internalist about moral uncertainty, but then an externalist about the basic principles of rationality. Is there a compelling reason you can give to take one view in some cases and the other view in the other cases?
Christian Tarsney: The argument is basically, well, those two arguments I just gave. Number one, it lets you avoid the regress problem. But number two, I think the more significant compelling argument is, insofar as we’re asking a question about rationality, the answer ultimately is going to be a criterion of rationality. And whatever that criterion turns out to be, that’s the criterion, despite the fact that people are capable of doubting or denying it. The reason that we go externalist about rationality is that we’re asking a question about rationality, right. Rather than if the question you’re asking is which option will produce the most value? Then, yeah…
Robert Wiblin: That’s an externalist question.
Christian Tarsney: Yeah. Right. That question doesn’t depend on your beliefs at all. It just depends on what the true moral theory is and the true state of the world. But when we’re asking a question about rationality, that just depends on the true theory of rationality, plus whatever the true theory of rationality says is part of the criterion, which includes your beliefs: your probabilities about empirical questions and about normative questions.
Robert Wiblin: Okay. Yeah. That actually makes a whole bunch of sense to me. Do lots of other people accept this view?
Christian Tarsney: It’s not like I’ve had a flood of emails saying, “You’ve convinced me, there’s no more problem here.”
Robert Wiblin: Right, “You nailed it.”
Christian Tarsney: I think very few people will want to take the extreme externalist view that no moral or normative uncertainty ever should figure in our practical deliberations about what to do. And I think very many people also worry about the regress problem and do think that there has to be some norm where we pound our fists. I think something in the vicinity of this view, a lot of people would at least be open to.
Robert Wiblin: I guess some people approach this whole thing from a different angle where they don’t think of ethics as being these external, eternal truths that come with the universe, like, say, laws of physics. They think of it maybe as just a reflective equilibrium that they reach about their preferences, or a way of describing their personal values, rather than pre-existing truths that predate them. Should they still care about this issue of moral uncertainty? Does it make sense to talk about moral uncertainty there? It seems like that’s going to be related to this externalism/internalism thing.
Christian Tarsney: Yeah, I think that’s right. There’s a couple ways of taking that attitude. One option is to be what’s called a non-cognitivist about metaethics, where you think that ethical judgements actually just aren’t claims about the world at all. Not claims, for instance, that some actions have a property of objective rightness; rather, they’re ways of expressing attitudes. The very simple hackneyed version of this view says that when I say that giving to the poor is right, that’s really just another way of saying hurray for giving to the poor. And if I say that kicking puppies is wrong, that’s just another way of saying boo, kicking puppies. There’s a lively debate about whether non-cognitivists who take more sophisticated versions of that view can even accommodate the phenomenon of moral uncertainty in the first place.
Robert Wiblin: It’s like saying it’s irrational to cheer for the soccer team you’re cheering for.
Christian Tarsney: Yeah. Well, normally, boos and hoorays, or attitudes of approval or disapproval more generally, are not truth apt. They’re not the kinds of things that can be true or false. And so it seems like they’re not the kinds of things we can be uncertain about. We need a proxy for uncertainty. People generally think, well, there is the clear phenomenology of moral uncertainty. People feel uncertain about moral questions. And so even the non-cognitivist has to come up with some explanation for that, or something that acts like uncertainty, even if it really isn’t. And then there’s also the kind of cognitivist anti-realist, where the hackneyed version of that view is, for instance, that something is right if it accords with a certain subset of my preferences. Exactly how you distinguish the moral preferences from other preferences is open for debate.
Christian Tarsney: But there again, I think you feel some pressure at least to account for the apparent phenomenon of people being uncertain about moral questions. One way that could go is if you think, well okay, my moral values are grounded in my preferences, but they’re something like the preferences that I would have under a certain kind of reflection. I know that, for instance, sometimes I am inconsiderate of other people’s interests, but I know that if I sat down and reflected and thought about how my actions were affecting them, I would form a preference to be more considerate. And so my actual deep-down moral values are those preferences I would have if I were sufficiently reflective and had time to think about it, and so forth. And so then that’s something I can be uncertain about. I can be uncertain what my preferences would be under that kind of reflection. And so maybe that’s a way in which these kinds of anti-realists should still be interested in moral uncertainty.
Robert Wiblin: I see. They’d be interested in moral uncertainty inasmuch as people find it hard to introspect and really reach some deep, fully-informed reflective equilibrium about what it is that they value morally. They could think about it for a very long time and hear all the different arguments and so on.
Christian Tarsney: Yeah. A simple way to think about this is we probably all have the experience of doing things that we regret for kind of moral reasons, treating people badly and realizing afterwards, and wishing that we hadn’t done it. And maybe a part of what you’re doing is trying to avoid those regrets, avoid ending up in a situation where you feel bad about things that you’ve done. And you can’t predict in advance with perfect precision what things you’ll feel bad about. You might want to hedge your bets a little bit and say, “Well, this feels like the sort of thing I might have regrets about later on, and so I’m not going to do it.”
Robert Wiblin: Yeah. All of this moral uncertainty thinking, to what extent do the different leading approaches to it have different practical recommendations for what people involved in global priorities research or effective altruism ought to do, or recommend that other people do? Does it have much known practical relevance yet?
Christian Tarsney: Well, so there’s a couple of things to say here. One is, if you take these kinds of skeptical views about moral uncertainty — either you’re an externalist, or you hold the view that says, “Well, you should just act on the one moral theory that you think is most plausible” — then think about the various arguments that trade on small probabilities of kind of extreme moral theories being correct. Arguably the insects case is one example of that: if you have some small but non-zero credence that insects are morally considerable, that they have moral status, and if they do, that’s so extremely important that it swamps everything else. Now it’s a little bit unclear whether that’s really moral uncertainty or just empirical uncertainty about whether insects have certain kinds of experiences. But something like that. Of course, if you’re not trying to hedge your moral bets, then you’re not going to find those arguments about low-probability moral considerations compelling.
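To make the “swamping” worry concrete, here is a toy calculation in Python. It’s only an illustrative sketch, not anything from the conversation: the option names, credences, and choice-worthiness numbers are all invented, and it simply assumes the theories’ scales can be compared at all, which is the problem the normalization ideas discussed next are meant to address.

```python
# Toy sketch: maximizing expected choice-worthiness across moral theories can let a
# low-credence but high-stakes theory dominate the verdict. All numbers are made up.

credences = {"insects_matter": 0.01, "insects_dont_matter": 0.99}

# Choice-worthiness of two hypothetical options under each theory (arbitrary units,
# naively assumed to be comparable across the two theories).
choice_worthiness = {
    "fund_insect_welfare": {"insects_matter": 1_000_000, "insects_dont_matter": 0},
    "fund_other_charity":  {"insects_matter": 10,        "insects_dont_matter": 100},
}

def expected_cw(option):
    return sum(credences[t] * choice_worthiness[option][t] for t in credences)

for option in choice_worthiness:
    print(option, expected_cw(option))

# fund_insect_welfare: 0.01 * 1,000,000       = 10,000
# fund_other_charity:  0.01 * 10 + 0.99 * 100 = 99.1
# The 1% credence theory swamps everything else -- unless, like the skeptical views
# above, you decline to hedge across moral theories in the first place.
```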
Christian Tarsney: If you do want to treat moral uncertainty kind of like empirical uncertainty, and for instance do something like expected value maximization, I think unfortunately the state of things at the moment is that we have a bunch of interesting theoretical ideas about how to respond to moral uncertainty, but they’re all just very, very difficult to apply in practice. For example, one of the ideas at the cutting edge here, which Will MacAskill and Toby Ord and Owen Cotton-Barratt have developed, is the idea of variance normalization. Say you want to know how to make comparisons between the value scales of two theories. Well, what you should do is look at the value or choice-worthiness assignment that each theory gives to all of the options in some big set. Then you measure the variance of those two choice-worthiness assignments, and you stretch or contract the scales so that each theory now has a variance of, say, one. And then that tells you how to make comparisons between the scales.
Christian Tarsney: And there’s some interesting, potentially compelling theoretical arguments for doing things that way, but then to actually get practical implications out of that, you have to first of all make a list of potentially all the conceivable practical options that any agent might ever face and figure out the choice-worthiness of each of them according to a given moral theory. And then usually because that set is infinite, you need something called a measure that tells you how to weight different subsets of this infinite set. And then you need to actually calculate the variance. And all this before you actually try to apply it to the decision situation in front of you. My general impression at the moment is that there’s just this very big gap, and it seems like a bigger gap with respect to moral uncertainty than empirical uncertainty, between the kind of theoretical cutting-edge ideas and the practical use cases that we’d really like to apply these tools to.
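As a rough illustration of the variance normalization procedure Christian describes, here is a minimal Python sketch. It deliberately assumes away the hard parts he flags: instead of an infinite option set with a measure over it, it uses a tiny made-up option set with every option weighted equally, and the theories, scores, and credences are invented.

```python
import statistics

# Each theory assigns a choice-worthiness score to every option in some (here, tiny)
# option set. The numbers and credences below are purely illustrative.
theory_a = {"opt1": 0.0,    "opt2": 1.0, "opt3": 2.0}
theory_b = {"opt1": -100.0, "opt2": 0.0, "opt3": 500.0}
credences = {"A": 0.6, "B": 0.4}

def variance_normalize(theory):
    """Rescale a theory's scores to mean 0 and variance 1 over the option set."""
    values = list(theory.values())
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)  # population standard deviation, options weighted equally
    return {opt: (value - mean) / sd for opt, value in theory.items()}

norm_a = variance_normalize(theory_a)
norm_b = variance_normalize(theory_b)

# Once both scales have variance 1, they are treated as comparable, and you can take
# credence-weighted expected choice-worthiness for each option.
for opt in theory_a:
    ecw = credences["A"] * norm_a[opt] + credences["B"] * norm_b[opt]
    print(opt, round(ecw, 3))
```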
Robert Wiblin: Alright. We’ve been at the philosophy for a couple of hours now, and I guess we’re heading towards the finishing line here. So I’d like to ask a slightly more practical question and see if any of your research has influenced the priorities that you have.
Robert Wiblin: So given all your research and everything you’ve learned over the last couple of years, and I guess over your entire career in philosophy… Imagining that you won the lottery and won billions of dollars and decided you wanted to spend it to improve the world as much as possible. What do you think your multi-billion dollar philanthropic foundation might look like? And maybe how, if at all, might it differ from Open Philanthropy?
Christian Tarsney: Yeah, I think unfortunately the boring answer is, I don’t think it would differ very much. I don’t have radically heterodox views within EA circles about what we should be prioritizing. I guess self-servingly, I think there’s a lot of stuff we don’t know the answer to, and so that multi-billion dollar foundation should spend a bunch of money on research. I think clearly existential risks should be pretty high on the priority list. And I think various things to do with norms, and institutions, and values. For instance, build better institutions for international cooperation on not just catastrophic risks, but problems like climate change, like non-extinction level pandemics, all the sorts of things that might make a long-term difference to whether humanity flourishes or doesn’t. None of that is terribly unconventional.
Robert Wiblin: Yeah. Unfortunately I think that’s my view as well. If I had any really heterodox ideas for Open Phil, I probably already would have told them.
Robert Wiblin: But let’s say we were to come back and ask you in 10 years’ time, and you gave a different answer because of things you had learned in the intervening time. What do you think are some of the likely reasons that you might have really changed your mind?
Christian Tarsney: I think the most likely thing is just discovering a new way to really usefully spend money, other than existential risk reduction and research. A lot of these norms and values things are just figuring out where philanthropic money or altruistically motivated agents can really get some traction, where there are leverage points, for instance, in government. And it seems reasonably plausible to me that we’ll just learn quite a bit about that in the next 10 or 20 years. And so there may be something on the scale of existential risks where we would just want to be pouring a very substantial part of our budget into it.
Robert Wiblin: Yeah. That makes a bunch of sense. Are there any important philosophical questions that you think we might plausibly make good progress on in the next 10 years, or are they likely to be probably longer-term projects?
Christian Tarsney: Yeah, I think it’s always very hard, even retrospectively, to say whether we’ve made any progress in philosophy. I want to say that we have. Particularly thinking about moral or normative questions, lots of things to do with human equality, the abolition of slavery, women’s equality, for instance. That has been genuine moral progress. And whether or not it’s been abstract philosophical arguments that ultimately moved the needle, certainly philosophers have had something to do with it: with women’s rights, for instance, or, in Peter Singer’s case, with animal ethics.
Christian Tarsney: So yeah, I guess I’m reasonably optimistic that moral philosophers have contributed to moral progress, whether or not that’s from the discovery of abstract moral truths or something else. And then, yeah, I think there’s room for us to make progress. I feel cautiously optimistic about making progress in the short term on, among other things, these epistemic questions where there’s a philosophical angle: thinking about inductive reasoning when we’re anticipating big structural breaks in the world, or reasoning under unawareness, where there are possibilities that we just haven’t even imagined.
Christian Tarsney: So yeah, I’m optimistic about progress there. And then in the longer term maybe making philosophical progress on things like consciousness, and being able to really answer questions like what are the physical or functional substrates of experience, and being able to say for sure whether insects or simple artificial intelligences or whatever have experiences.
The state of global priorities research [02:21:33]
Robert Wiblin: Okay. Let’s do a little update on the state of global priorities research for anyone in the audience who’s been listening for the last couple of hours and is thinking, “Damn, I’d like to do the kind of work that this guy is doing.” How is the field of global priorities research progressing? I think a couple of years ago it was a new name for an agglomeration of different research agendas, and the number of people involved was really pretty small. But my impression is that the number is growing in leaps and bounds. Is that right?
Christian Tarsney: Yeah, I think that’s right. So we are trying to keep our finger on the pulse, both by staying in touch with academics elsewhere who are doing work that we think of as at the core of global priorities research, but also, and more particularly, by kind of cultivating the pipeline of up-and-coming undergraduates, master’s students, and particularly PhD students who are interested in doing GPR. And that pipeline has just been very, very strong and very promising in both philosophy and economics. And particularly it’s gratifying to see in economics because well, GPR and effective altruism got a head start in philosophy through people like Peter Singer, Toby Ord, and Will MacAskill. And so I think economics is a few years behind in terms of just the number of people working on these questions around cause prioritization.
Christian Tarsney: But yeah, there’s a great pipeline of really smart PhD students doing really exciting work. And we’re I think having some success at getting established academics to work on these questions and think about them seriously as well. And in the longer term, thinking about branching out into other fields and trying to get people in, say, political science, or history, or psychology interested in these questions too.
Robert Wiblin: Yeah. What sort of folks in the audience might be a good fit for the sorts of vacancies that are coming up now and in the next couple of years?
Christian Tarsney: So at GPI and other places where global priorities research is happening at the moment, hiring is in philosophy and economics, and particularly for people who are finishing PhDs. So the obvious uninspiring answer is if you’re working on a PhD in philosophy or economics, you’d be a good fit for global priorities research in philosophy or economics. But there’s also, for people earlier on in the pipeline, for instance, GPI has been hiring pre-doctoral researchers, people just finishing up their undergrad, or maybe just finishing up a master’s. So anybody at that stage might want to think about applying for that.
Christian Tarsney: And then thinking in the longer term, I guess I think we do expect that there is a wider range of disciplines that have important things to contribute here. So I think maybe someone who is say an early undergraduate or in high school might also think about whether they have some interest or proclivity for some of these other fields, like quantitative historical research, like political science, building institutions that better serve the public good or something like that. And that might be a route to contributing as well.
Robert Wiblin: Yeah. What do you think is most distinctive about the office culture at the Global Priorities Institute?
Christian Tarsney: I mean, it’s hard to say, particularly now, because we haven’t had an office culture in the literal sense for a little over a year. I think compared to other academic research organizations, we really stress coordination and making sure that we’re thinking together about what the most important questions are, and that we’re directing our research energy towards those questions that we’ve identified as most important.
Christian Tarsney: So a failure mode that Hilary Greaves, who’s our director, has been very worried about and just really focused on avoiding is just everybody being kind of nerd-sniped by whatever random questions feel exciting, and so going off in maybe not the highest priority directions and not really coordinating and focusing on topics. And so we really try to maintain focus and mission alignment.
Christian Tarsney: I guess another thing that’s distinctive compared to other academic organizations is an emphasis on actual research collaborations. And particularly in philosophy, we do more co-authoring than the typical philosopher. And then finally, I guess I would say we’re just open to doing weird experimental stuff. Like we’ve spent a while trying to develop a somewhat arcane scoring system for potential research projects, and we go through scoring exercises to try to figure out what we want to do next. Yeah, we just are open to being weird and experimental in our culture in a way that I think academic organizations typically aren’t.
Robert Wiblin: Yeah. It’s not a super conformist group.
Christian Tarsney: Yeah.
Robert Wiblin: I guess for the people who aren’t doing an academic PhD but are interested in supporting the field, what kind of non-research roles do you also need? I’m guessing communications people and operations folks would also be really useful?
Christian Tarsney: Yeah. So at the moment we have a fairly robust operations team that varies in size depending on who you count, but three to five people who support GPI’s operations. And I think we’ve benefited from having people who are enthusiastic about GPI’s mission and believe in what we’re doing and understand and are interested in what the researchers are doing. And yeah, just in all sorts of ways provide really good and useful support. So certainly somebody who’s interested in working in operations in an EA organization, I think research organizations are one place where operations work can be really crucial and can make a big difference to whether researchers are able to be productive and answer the questions they’re trying to answer.
Robert Wiblin: Yeah. I guess for people who want to keep track of jobs at GPI, obviously you list them all on your website. And I think that we also list all of your vacancies on our job board. So if you sign up to our newsletter, you’ll get periodic updates when we update the job board.
Robert Wiblin: If people want to donate to fund more global priorities research, I guess there’ll be a guide to the research agenda for the Global Priorities Institute on your website. Are there any other options that people should possibly have in mind if they’re scouting out different opportunities?
Christian Tarsney: Yeah, I mean, you can donate directly to GPI, and of course we’d be very happy if people choose to do that. You can also donate to the EA Long-Term Future Fund, which, if I’m not mistaken, funds global priorities research among other things. I suppose you could donate to the Forethought Foundation. I’m not sure if they accept donations, but I would imagine they do.
Christian Tarsney: I’m sure there are other options. There are lots of research organizations out there that are doing things in the vicinity of global priorities research, thinking about the long-run future, for instance, that are doing great work and would benefit from financial support.
Robert Wiblin: Yeah. I guess of course there’s a fair bit of direct or indirect global priorities research that goes on at Open Philanthropy, but having billions of dollars in the bank, I don’t know that they take any more donations. Or I guess they don’t take donations unless they’re on the scale of billions of dollars. So that probably cuts down the audience somewhat.
Robert Wiblin: Okay. Yeah. You’ve been super generous with your time. I had a couple of more personal questions to finish off.
Christian Tarsney: Sure.
Robert Wiblin: When I was doing some background research for this interview, I found some videos of you on YouTube, seemingly involved with very competitive debating. And it sounded like you’d been involved in competitive debating when you were younger and then gone on to start judging some debates. And I saw this crazy video of a Lincoln-Douglas debate in which people were just like… I mean, people say that I talk fast on this podcast, but these debaters were just like going blisteringly quickly through a series of arguments to the point where I could barely understand what they were saying. And I think you were a judge in one of these debates. What is going on with that? Are people learning good thinking or debating skills from these competitions?
Christian Tarsney: Yeah. So I was a competitive debater for a couple of years in high school and then I coached for many years after graduating, basically until I left grad school and thought I needed to make a break and focus on research. But yeah, in the United States in particular, competitive debate has gone in this very weird esoteric direction where the thing that’s most striking about it to outsiders is how fast everybody talks.
Christian Tarsney: And the reason for that is pretty straightforward: you have fixed speech times. So you have six minutes for your first speech, and then the other debater has seven minutes for the next speech, and so forth. And you just want to make as many arguments as you can in that period of time. And you have judges who mostly were competitive debaters before, and so they learned to understand, more or less imperfectly, people talking at 300 or 350 words a minute. And so, yeah, you’re just able to say more things, and the judges are mostly able to understand it.
Christian Tarsney: In terms of whether it, you know, has pedagogical value… I had an amazing experience in high school debate myself, and an amazing experience coaching it. I guess I would say that competitive debate creates a kind of stylized form of argumentation where, for instance, there are often arguments that are bad, but that you can make very quickly, while explaining why they’re bad takes a long time. And so it’s a good argument within the game of debate, right? Because it forces your opponent to waste a lot of their time explaining why your bad argument is bad.
Christian Tarsney: And I think there are debaters who understand things like this, or understand the limitations of the activity and understand that arguments that succeed in competitive debate aren’t necessarily good arguments. And those debaters can get an enormous amount out of the activity. One of the things that was really rewarding to me is you have high school students going out and reading well, not just Kant, and Locke, and Hobbes, and Mill, but reading contemporary philosophers, reading Nick Bostrom among other people, reading Christine Korsgaard. And in many cases they are really understanding and being very thoughtful and taking away a lot from it.
Christian Tarsney: But I think there also is, if you don’t recognize that it is a kind of stylized argumentative game and that the arguments that are succeeding in debate aren’t necessarily good arguments, and that there’s a higher level of academic rigor that you can ultimately aspire to, then I think it can be intellectually problematic.
Robert Wiblin: Yeah. Poor training for cases where you actually care whether you’re getting the right answer or not.
Christian Tarsney: Yeah.
Robert Wiblin: It seems like a weakness of the scoring system that you get lots of points just like really quickly kind of mumbling arguments that aren’t super persuasive, necessarily.
Robert Wiblin: One, you could have like a point scoring for rhetoric. So like, do people make their arguments in a compelling way that an ordinary person in the speech might find interesting or would be able to follow. And maybe also like, do the judges find the arguments to be compelling as stated, or can they themselves think of counter arguments that you haven’t addressed. Maybe that would return the speech style back to something that would be perhaps a bit more useful in ordinary life?
Christian Tarsney: Yeah. Well, I think it depends on what skill you’re trying to train. So if you’re trying to train public speaking, then certainly competitive Lincoln-Douglas debate or policy debate as it’s practiced in the United States isn’t the way to do that. It doesn’t teach public speaking skills. And there are other activities like, well, speech, like oratory, for instance, that do teach that.
Christian Tarsney: If you’re trying to teach argumentation, well then the fact that we don’t care how rhetorically elegant or persuasive a debater is serves that goal, because you only care about the arguments. And there’s a norm that a lot of debaters accept, or a lot of judges accept, of non-intervention. That I think this is a bad argument, but it’s the job of the other debater to explain why it’s a bad argument. I’m not going to step in and say that it’s a bad argument.
Christian Tarsney: And yeah, I guess my take is it does train argumentation and critical thinking skills, and thinking on your feet and intellectual creativity. As long as you recognize the limitations of what you’re doing and recognize that if you, for instance, go into academia or even when you’re an undergraduate in college and you’re engaged in kind of genuine truth-seeking, there are ideas and skills and knowledge that you can take from competitive debate that are useful there, but you’re not doing the same thing. And the things that work in competitive debate don’t always work in truth-seeking contexts.
Robert Wiblin: Yeah. Have you seen any people do really well at debating, but potentially learn bad epistemics, learn bad lessons about how to think and how to argue, because they’ve just learned this kind of persuasive, throw-out argument style, and that maybe holds them back in other lines of work?
Christian Tarsney: Yeah, I guess I would say the thing that I’ve more commonly seen is people who are very good at the game of debate and just aren’t particularly interested in the actual issues that are being debated. Or maybe they learn some philosophy, for instance, because they need it to win debate rounds, but they’re just not that interested in philosophy. And then if you’re not interested in the subject matter intrinsically, then you’re just not going to become good at thinking about it.
Christian Tarsney: And then there certainly are people who learn things in a superficial way for debate purposes, and don’t immediately recognize the limitations of what they learned or how superficial their understanding is. But I think there are also plenty of people who really do find the questions that they’re debating genuinely interesting and do want to go and learn and think about those questions independently of just trying to win debate rounds.
Robert Wiblin: Okay. I’ll stick up a link to that video, if people want to see the extremely fast-speaking debate style. I hadn’t seen that one before, despite doing debating at high school myself.
Robert Wiblin: Just another question I wanted to ask before we finish. What’s your favorite philosophical problem or thought experiment, or perhaps the one that you think is the most fun? Maybe one that doesn’t necessarily have anything in particular to do with global priorities research.
Christian Tarsney: So a problem that, for whatever reason, I’ve always just found really compelling or fascinating: there’s this whole family of philosophical paradoxes involving what’s called self-reference, where the famous example is the liar paradox, the sentence that says, “This sentence is false.” And there’s a thousand different paradoxes in this vicinity.
Christian Tarsney: So a puzzle that I’ve found particularly compelling since I encountered it in undergrad is what’s called the Berry paradox. And the paradox goes like this: there are some expressions in the English language that refer to numbers. For instance, “two plus two” refers to the number four. And of course there are some expressions that don’t. And there’s a finite number of expressions in English of any given length, say fewer than 100 characters long when you write them down. And so there’s a finite number of, say, natural numbers that can be referred to by English expressions fewer than 100 characters long.
Christian Tarsney: So now consider the following number: the smallest number not named by any English expression of fewer than 100 characters. There should be some such number, right? One, okay, we can name that in fewer than 100 characters; two, and so forth. But we’re eventually going to encounter a number that we can’t name in fewer than 100 characters. Except that expression, or at least the original Berry version of it, “the smallest natural number not named by any English expression of fewer than 100 characters,” is only 93 characters long. And so in fact there must be a smallest number not referred to by any English expression of fewer than 100 characters, but also that very number is referred to by an English expression of fewer than 100 characters, namely the expression I just gave.
Robert Wiblin: That rules out that one, but then what about the next higher one? That’s now the new one that will be referred to by this.
Christian Tarsney: Yes. But then that’s out, right? If that’s what that 93 character expression refers to.
Robert Wiblin: Yeah.
Christian Tarsney: I can’t say exactly why, but among all the self-reference paradoxes, that’s the one that always just blew my mind.
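For anyone who wants to check the character counting for themselves, here is a one-line sketch in Python. The exact total depends on which wording of the Berry expression you use, so the point is only that a phrasing along these lines fits comfortably under the 100-character limit it quantifies over.

```python
# One wording of the Berry expression; the precise count depends on the phrasing chosen.
expression = ("the smallest natural number not named by "
              "any English expression of fewer than 100 characters")
print(len(expression), len(expression) < 100)  # prints the length and True
```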
Robert Wiblin: We’ll stick up a link to maybe a Wikipedia article or another page about that self-reference paradox and maybe some others as well.
Robert Wiblin: Alright. My guest today has been Christian Tarsney. Thanks so much for coming on the 80,000 Hours podcast, Chris.
In case you don’t know, we at 80,000 Hours have an email newsletter which you might find useful.
If you subscribe, we’ll notify you when we update our job board, which lists hundreds of potentially high-impact job vacancies and stepping-stone roles that might help you have more impact in future.
The earlier you find out about these new opportunities, the less likely you are to miss an application deadline for a role you’d love to get.
We’ll also email you about our new research into potentially pressing problems and ways to solve them, as it comes out.
Emails go out to that list about 2–3 times per month.
You can join by putting in your email address here. If you’ve listened all the way to the end of an episode this intense, you seem like exactly the kind of person who’d enjoy being on the list.
Alright — the 80,000 Hours Podcast is produced by Keiran Harris.
Audio mastering for today’s episode by Ryan Kessler.
Full transcripts are available on our site and made by Sofia Davis-Fogel.
Articles, books, and other media discussed in the show
Christian’s work
Thank goodness that’s Newcomb: The practical relevance of the temporal value asymmetry
Future bias in action: Does the past matter more when you can affect it?
Exceeding expectations: stochastic dominance as a general decision theory
The epistemic challenge to longtermism
Other links
The big problem with the Apple Watch is that time is an illusion, Vox
A paradox for tiny probabilities and enormous values, GPI working paper by Nick Beckstead and Teruji Thomas
Fixed-point solutions to the regress problem in normative uncertainty, by Phil Trammell
Lincoln-Douglas-style high school debate (this one judged by Christian)
The Berry paradox
Other paradoxes of self-reference, Stanford Encyclopedia of Philosophy
Transcript
Rob’s intro [00:00:00]
Hi listeners, this is the 80,000 Hours Podcast, where we have unusually in-depth conversations about the world’s most pressing problems, what you can do to solve them, and whether or not the past actually exists. I’m Rob Wiblin, Head of Research at 80,000 Hours.
The Global Priorities Institute at Oxford University has led to some of our most popular episodes in the past, thanks to Hilary Greaves and Will MacAskill — and in this episode we’re back for more fundamental thinking about what matters most with their colleague Christian Tarsney.
I was slightly worried this episode would be a bit too technical, but Christian turned out to be a great communicator who was able to zero in on the parts of his papers that really matter to those of us trying to make the world a better place.
Most importantly, I think this episode may contain a real solution to the problem of fanaticism and Pascal’s mugging cases, which have in recent years been used to challenge the merit of using expected value to make decisions in high-stakes situations.
I came into this interview not really understanding Christian’s research, but left able to explain it to my housemates, which counts as serious progress in my mind.
As always, we’ve got links to learn much more on the page associated with this episode, as well as a transcript and summary of key points. If your podcasting software allows it, we also support chapters so you can skip to whichever part of the conversation interests you most.
Without further ado, here’s Christian Tarsney.
The interview begins [00:01:20]
Robert Wiblin: Today, I’m speaking with Christian Tarsney. Christian is a philosopher at Oxford University’s Global Priorities Institute where he works with previous 80,000 Hours podcast guests Hilary Greaves and Will MacAskill. He did his PhD at the University of Maryland on how to make rational decisions when you’re uncertain about fundamental ethical principles, and his research interests include ethics and decision theory, as well as effective altruism and political philosophy. He’s published papers on — among many other things — the use of discount rates for climate policy and our attitudes towards past and future experiences. Fun stuff. Thanks for coming on the podcast, Christian.
Christian Tarsney: Thanks, Rob. Great to be here.
Robert Wiblin: I hope to get to talk about moral fanaticism and epistemic challenges that people have made to longtermism. But first, what are you working on at the moment and why do you think it’s important?
Christian Tarsney: So broadly, I’m a researcher in philosophy at the Global Priorities Institute, and we are trying to build a field of global priorities research, which means thinking about how altruistically motivated agents should use their resources to do the most good — and more specifically, what causes or problems they should focus on. At the moment we’re focused on building that field in philosophy and economics and trying to recruit the tools of those disciplines to answer questions that we think are really important. We think this is important because if we can come up with better answers to these questions, then hopefully that’ll influence what people actually do when they’re deciding where to allocate their resources.
Christian Tarsney: I think as a philosopher, you always have this background worry, are we actually improving our understanding of anything or are we just spinning our wheels? But optimistically, I think we’ve made some progress and are continuing to make progress on the low-hanging fruit because not a lot of people have thought really explicitly about this question of how to use resources to do the most good and how to prioritize among the many things that seem important and pressing. More specifically, my own research interests at the moment… I have a few things on my plate, but the things that are really gripping me, number one are epistemic issues to do with predicting and predictably influencing the far future. So insofar as at least one of the most important things we want to do with our resources is make the world a better place in the very long term, we want to be able to predict the long-term effects of our actions.
Christian Tarsney: And we just have very little empirical information on our ability to predict or predictably influence the future on the scale of centuries or millennia. It’s hard to see how we could have that data. And so we have to do some a priori speculating or modeling to try to figure out how we can do this well. And then the second related question that I’m interested in is, well, suppose it turns out that we have a limited ability to predict the far future, but we have enough that in expectation the far future really matters, so we can make a big difference to the expected value of the far future. But most of that expected value comes from tiny probabilities of having enormous, really persistent effects. Should we just naively maximize expected value in those situations? Or are there some other decision rules that apply when we’re dealing with those extreme probabilities? So those are two problems that seem pressing from the standpoint of cause prioritization, and are also neglected and hopefully tractable with the tools of philosophy and economics.
Future bias [00:04:33]
Robert Wiblin: Beautiful. Alright. Yeah. We’ll return to all of these issues that you raised through the course of the conversation, and also check in on how the field of global priorities research is going later on, but let’s waste no time getting into an interesting philosophical issue that you’ve looked into into the past, which is called future bias. You’ve got two papers out on this topic, called Thank goodness that’s Newcomb: The practical relevance of the temporal value asymmetry and Future bias in action: Does the past matter more when you can affect it? First off, what is future bias, for people who are not familiar with it?
Christian Tarsney: Broadly, future bias or the bias towards the future or the temporal value asymmetry is this phenomenon that people seem to care more about their future experiences than their past experiences. And that means, among other things, that you’d prefer — all else being equal — to have a pleasant or positive experience in the future, rather than the past. And you’d prefer to have a painful or a negative experience in the past, rather than the future. So there’s a number of cases or thought experiments that illustrate this, but a famous one from Derek Parfit goes like this: Imagine that you’re going to the hospital for an operation. And the operation requires you to be conscious and it will be very painful, but they’ll give you a drug afterwards to temporarily forget about it. So when you wake up after the operation, you won’t immediately remember that it’s happened. And so you wake up in the hospital and you can’t remember whether you’ve had the operation. And you call the nurse and the nurse comes over and you say, “Have I had my operation yet?”
Christian Tarsney: And they look at the foot of your bed, where there are two different charts for two patients. And they say, “Well, you’re one of these two, I don’t know which one is you. One of these patients had a three-hour operation yesterday and it was very long and painful and difficult, but it was a complete success. And that patient will be fine going forward. The other patient is due to have a one-hour operation later today, which will be much less painful and also expected to turn out well and so forth.” And the question is which patient would you rather be? And most people have the intuition that you would rather be the patient who had the three-hour operation yesterday rather than the one-hour operation later today, because then the pain is in the past.
Robert Wiblin: Yeah.
Christian Tarsney: So what’s odd about this is of course, normally we prefer less pain rather than more pain. In this case, we prefer more pain just because the pain would be in the past rather than the future.
Robert Wiblin: Yeah. So that feels very intuitive. I think to most people that they’d rather have had bad experiences in the past than have bad experiences coming up. What’s problematic about it? Is there some tension between that and maybe like other beliefs or commitments that we have?
Christian Tarsney: Yeah. So a few arguments potentially can be made for the irrationality of future bias. One is just that the burden of proof is on the person who wants to defend or justify future bias to explain what’s the relevant difference between the past and the future such that we should care more about the one than the other. And it turns out that this is just surprisingly difficult to do. So you can contest that the burden of proof actually goes that way. But for instance, there’s this famous argument from Parfit called future Tuesday indifference. He says, “Look, just imagine someone who is normal in every respect, except that they don’t care about what happens to them on future Tuesdays. So if they can have a one-hour operation next Monday or a three-hour operation next Tuesday, they’ll opt for the three-hour operation just because it’s on a Tuesday.”
Christian Tarsney: And we clearly think there’s something normatively defective about that person. I think many of us would be inclined to say they’re irrational just because something’s on a future Tuesday. Why is that a reason to care about it less? So similarly, just because an event is in the past, why should we care about it less?
Robert Wiblin: Okay. I guess I feel like it seems very natural that humans would have this intuition or that we would have kind of evolved or learned this intuition because our past experiences having already happened and not really being changeable and not going to happen again, it seems like you can’t really have any causal effect on them. So to some extent it’s kind of water under the bridge and it makes practical sense to ignore the past? Or I mean, maybe learn from the past, but to ignore things that happened in the past because they’re not going to be able to affect them in the same way that they can affect something else that might happen in future. Is that a good enough reason not to worry about them? Or maybe is it that it’s a good reason to not worry too much about the future, but inasmuch as in these hypothetical odd scenarios that we paint where you can, in some sense, have an effect on the past, that those are the cases where you should worry about your intuition is getting polluted by this, like by the normal thing where the past is unaffectable?
Christian Tarsney: Yeah. So I think a lot of people do take the view that our inability to affect the past has something centrally to do with our indifference toward past experiences. And actually in this paper Future bias in action recently published by myself and some collaborators at the University of Sydney, we tried to test this experimentally. And we found that in fact, when you ask people to imagine hypothetical scenarios where they can affect their own past experiences, they care about their past experiences more, which suggests that your inability to affect the past is one reason why you feel indifferent to it.
Christian Tarsney: But at the same time, if we’re asking the normative question of should we be indifferent to the past, then there are various reasons to think that our inability to affect the past is not a reason to judge that our past experiences don’t matter as much as our future experiences. So for instance, if that were true, then you should similarly be indifferent to inevitable future experiences. If you know for sure that something bad is going to happen to you tomorrow, you shouldn’t care about it. And in fact, we don’t have that kind of attitude. So that seems like at least a kind of inconsistency.
Robert Wiblin: Yeah. If I recall from that experiment that you did, the unaffectability of the past explained part of people’s different reactions.
Christian Tarsney: Yeah.
Robert Wiblin: But then when you got rid of the, or you tried to equalize the unaffectability, then there was still some future bias present.
Christian Tarsney: Yeah, that’s right. So what we ended up concluding in that paper is there are probably multiple explanations for future bias. The other explanation that people have prominently proposed is that we care more about the future because we have the intuitive belief that we’re moving through time. In some sense, that’s hard to explicate, but we have this intuition that we’re moving away from the past and towards the future, and that your future experiences are ahead of you rather than behind you, and that makes it rational to care more about the future than the past.
Robert Wiblin: So it’s like time is kind of playing a videotape, and the things that haven’t played yet are still coming up. And so you can still experience that pain, whereas the stuff in the past is somehow irrelevant or just wiped off of the ethical picture somehow.
Christian Tarsney: Yeah, that’s right. I mean, it turns out to be just very hard to explain, well, first of all, this idea of moving through time or time having a direction or a flow, and then second to explain why that should make it rational to care less about the past than the future in a way that doesn’t just become a roundabout way of saying, well, the past is in the past and the future is in the future, but a lot of people do see an intuitive connection here, including me.
Philosophy of time [00:11:17]
Robert Wiblin: Yeah. Okay. It sounds like we might have to take a detour into the philosophy of time, or understand what different models people have of the nature of time and the present in order to dissect whether this idea makes any sense. You want to give an intro to that?
Christian Tarsney: Sure. So the central debate in the philosophy of time over the last 100 years or so is whether this idea of time moving or flowing or us moving from the past towards the future corresponds to any objective feature of reality. And this is a debate that’s also playing out, for instance, in physics. It’s something that our best physical theories maybe give us some indications one way or another, but don’t seem to settle, and you have physicists as well as philosophers on either side of this debate. And various arguments have been proposed either way, but well, the debate is still very much unsettled. And it’s also a little bit unclear exactly what the debate is about.
Christian Tarsney: So one thing, for instance, that people seem to disagree about, is the present moment, the ‘now.’ Is there one moment in time that’s objectively now, and that moves from earlier times towards later times? Or is it just that, for instance, the current time slice of me happens to be located at this location in time, and when I say ‘now,’ well ‘now’ just works like ‘here’ as a way of indicating the place in time where I happen to be located. So that’s one aspect of this debate that people try to get a handle on.
Robert Wiblin: Right. I don’t know that much about the philosophy of time, but I think my understanding is that there are three big theories that people put forward with different levels of plausibility. One is I think presentism, which you were describing, which is like, only the present instance is ‘actual,’ I think is the term that we use. I guess I’m not entirely sure what ‘actual’ means in this context, maybe that’s probably what people debate a lot. People are like, only the present instant is actual. Then you’ve got the ‘growing block’ theory of time, where all of the past exists or is actual because that has kind of been locked-in, because it’s already happened. And I guess the present instant exists as well, and that instant is just constantly being added to this recording of time that gets locked in. But in that one, the future isn’t yet actual.
Robert Wiblin: And then I guess you have eternalism, which is the idea that the past, the present, and the future are all actual to the same degree. It’s just that we happen to be like… My personal self happens to be passing through this instant, but all of them exist in some sense. And I guess on that view that there would be symmetry between things that happened in the past and things that happened in the future and how ethically weighted they are.
Christian Tarsney: Yeah, that’s basically right. But there are two separate debates here that are worth teasing apart. So one is about what philosophers called the ontology of time, so what moments in time or parts of time exist. And that’s the debate that you were describing. And if you’re a presentist or a growing block theorist, then you’re basically committed to the passage of time and the movement from the past to the future being in some sense objectively real. But if you take this other view, eternalism, you think the past, the present, and the future are all equally real. That doesn’t necessarily commit you one way or another on this debate about the passage of time. So you can still believe that the past or the future are real, but the present is still uniquely and objectively present. It has some special status. So there’s what people call the ‘moving spotlight’ theory, which says there is this eternal block of time, past, present, future events, all existing. But one moment in the block is illuminated at any given moment. And that’s the present.
Robert Wiblin: I see, interesting. I guess on the growing block model, where what actually exists in this ontological sense is kind of increasing as time passes, that would seem to suggest in some way that maybe you care more about the past, right? Because the past is kind of actual and locked in. Whereas the future is this ethereal thing that hasn’t happened yet. I guess maybe you could say there’s a symmetry if the future will happen. So at some point it will matter, but inasmuch as it’s uncertain, the past matters potentially even more.
Christian Tarsney: Yeah. This is something that philosophers have remarked on repeatedly, and one thing that people often say is kind of surprising, that nobody defends ‘shrinking block’ theory, that says the present and the future are real and the past isn’t. That would be a really neat explanation for why the future matters more than the past. But interestingly, we have on the one hand this very strong intuition that the future matters more than the past. And on the other hand, many people have the intuition that the past is real in a way that the future isn’t.
Robert Wiblin: So what kind of resolutions have people proposed to this? And how do they interact with people’s broader philosophical attempts to make sense of the nature of time?
Christian Tarsney: Yeah, well, so there’s an ongoing debate — as there usually is in philosophy — about whether the bias towards the future is rational or irrational. And maybe at a finer level of grain, whether it’s rationally required to care more about the future or rationally required to be neutral between different times, or you’re just rationally permitted to do whatever you want. And the latest set of moves in this debate have involved pointing out various ways in which whether you care about the past or not can affect your choices. So the obvious boring case is, well, what if there’s backward time travel? And you could actually retro-causally affect your past experiences? But there are other interesting cases. So for instance, if you are risk averse, then whether you’re biased towards the future or not can make a difference to your choices. Because whether one option is riskier or less risky than another can depend on whether you’re counting the stuff in the past that’s already baked in — and it might, for instance, be correlated in certain ways with what’s going to happen in the future.
Robert Wiblin: Another approach that one might take to this would be to reject what you were saying earlier, that the burden of proof is on the person who says that they care more about the future. And you might say, well, maybe this is just like, rather than being something that seems more irrational, like the future Tuesday case, where you just, for some reason that you can’t explain, don’t care about Tuesdays, this is more like a taste thing. Where it’s like, I like apples, but I don’t like oranges. We don’t think that you have a special burden of proof there. It’s more just a matter of taste, and a matter of personal preference. Is it plausible to run that line of argument? That it’s just like, personally, I just care about the future, and I don’t care about the past, and that’s just how I am and I don’t have to justify myself?
Christian Tarsney: I think that’s plausible. There are a couple arguments you could mount against it. So one question or complication is whether the bias towards the future also affects your other-regarding or altruistic preferences. So this is something people seem to have different intuitions about. Some people think that the bias towards the future is exclusively first personal. So when I’m thinking about other people’s experiences, people I care about, I don’t particularly care whether their pain is in the past or the future. You can manipulate people’s intuitions about this. So if you think about someone far away on the other side of the world, maybe it doesn’t seem to matter that much, whether their pain happened yesterday or tomorrow. But if it’s, say, your partner who you live with, you’ll feel better if they’ve already had their painful operation yesterday rather than today.
Christian Tarsney: And of course, if you are biased towards the future, at least in some sort of other-regarding altruistic cases, then it seems like there’s a kind of higher burden of justification. It can’t just be your personal preference that their pains be in the past rather than the future. There’s also the set of ways in which the bias towards the future might affect your choices. So for instance, if you’re biased towards the future and risk averse in a particular way, you can be money pumped. So you can make choices that will result in you being definitely worse off than you otherwise might’ve been. And you might think any pattern of preferences that allows you to be money pumped is ipso facto irrational, and not just a matter of taste.
Money pumping [00:18:53]
Robert Wiblin: Yeah. Can you explain this concept of money pumping? It shows up a lot in this discussion of ethics and decisions theory and rationality and so on, but I think probably not everyone has heard the idea.
Christian Tarsney: Yeah. So a money pump basically is a sequence of choices where an agent with particular dispositions will choose a series of options that leave them definitely worse off than some other series of options they might have chosen would have. So the classic example is if you have cyclic preferences. If I have apples and oranges and bananas, and I prefer an apple to an orange and an orange to a banana, and a banana to an apple, then, well, you can say, “I have an apple,” and you can say, “Well, I’ll trade you your apple for a banana if you pay me one cent.” And I take that deal because I prefer bananas. And then you say, “Well, I’ll give you an orange in exchange for that banana, if you give me one cent.” And similarly then I can get you to trade back for the apple, and you’ve gotten three cents out of me, and I’m just stuck with the apple that I had in the first place. So all sorts of patterns of preference can give rise to these sequences of choices that leave you definitely worse off.
Robert Wiblin: Yeah. Sometimes people would defend that it’s acceptable in some way to hold a position where you can be money pumped. Often in philosophy you face unpleasant trade-offs, you have to choose a position that has one weakness, or a position that has another weakness. And this is one of the weaknesses that a view might have, is that it’s vulnerable to money pumping. And it’s an undesirable property, but not necessarily a completely decisive one if every other option also has some unpleasant side effects.
Christian Tarsney: Yeah. I think that’s right. There’s plenty of debate about how decisive money pumps should be. I think one distinction that’s worth making is between what are sometimes called ‘forcing’ versus ‘non-forcing’ money pumps. So something like having incomplete preferences. If I prefer apples to bananas, but oranges are just incomparable to both, like I have no preference between apples or bananas and oranges, then it seems naively like it’s rationally permissible for me to make a series of choices that’ll leave me worse off, but it’s also rationally permissible for me to not do that. And you can say, well, there’s just an extra rule of rationality that says I shouldn’t do the sequence of things that will constitute a money pump. But in other cases, like the transitivity case, your preferences seem to commit you or force you to do the thing that leaves you definitely worse off. And it seems at least intuitively compelling that having preferences that force you or commit you to make yourself definitely worse off, that that’s at least a significant theoretical cost.
Robert Wiblin: Yeah. There’s something more seriously problematic there.
Christian Tarsney: Yeah.
Time travel [00:21:22]
Robert Wiblin: Okay. So we’ve discussed a couple of different approaches that people might take to resolve this issue, or a couple of different positions that people might take. How do people respond to a time travel case where you imagine a world where time travel is possible? You can go back into the past and change how things went, and then make people experience less suffering in the past. Does that tend to make a big difference to people’s attitudes, to how important the past is to them?
Christian Tarsney: So this is what we investigated in this paper Future bias in action and we found that it does, to some extent. So it doesn’t in aggregate make people perfectly time neutral, people still on average care more about the future than the past, but the asymmetry becomes weaker when you consider backward time travel cases.
Robert Wiblin: Yeah. Interesting. I guess it’s a bit hard to know how to concretize the time travel case, because you imagine like, okay, so you can go back in time and then run things again and have them go better. But then I’m like, does that mean it’s happened twice? Does it now get double value? Or am I erasing the original run-through and causing it not to have had any more or consequences? It almost raises as many questions as it answers.
Christian Tarsney: Yeah. Your theory of time travel definitely makes a difference here. You might think, well, if you think of backward time travel in a way where events, say, happened the first time around in the past, but then you can go back and erase them, there’s this additional question: Do the events that you erased still matter, or are they no longer part of the timeline? I think it’s fair to say that most philosophers are inclined to think that with time travel — insofar as it’s metaphysically possible — there has to be one consistent timeline. And so anything that you do if you go back into the past was already part of the past, but you might have limited information.
Christian Tarsney: So the case that we described in our experiment, for instance, you know that you were tortured for some period of time in the past, but you don’t remember exactly how long you were tortured or how many times you were subjected to an electric shock. And you have the opportunity to affect that retro-causally to determine whether you had 1,000 shocks or 1,010 shocks, or something like that. But you know that you’re not erasing the past, you’re just influencing what the past already was.
Robert Wiblin: Philosophers think that time travel, or I guess physicists think that time travel is kind of conceptually possible, or like, I guess I should say retro-causality is possible, but you need to have a self-consistent loop—
Christian Tarsney: Mm-hmm.
Robert Wiblin: —where the past affects the present which causes the present to cause the past. And then you’ve got a consistent series of causes that all fit together like puzzle pieces. I don’t know whether you want to explain the philosophy of time travel, but is that right?
Christian Tarsney: Yeah. I’m not particularly… I’m venturing a little bit outside my area of expertise, but general relativity has solutions that involve backwards time travel, where you have what are called closed timelike curves moving into their own past. But yeah, those solutions all involve one self-consistent timeline rather than, for instance, branching timelines, or erasing events that originally happened in the past or anything like that.
Robert Wiblin: Yeah. I think this comes up in not just philosophy, because there’s like some theories within physics of like at the subatomic level, you could end up with retro-causal stuff, and then you want to figure out well, is that self-consistent in a way? Or is that going to violate some other fundamental principle of physics?
Decision theory [00:24:36]
Robert Wiblin: Okay. Coming back to future bias though, let’s talk about the interaction between future bias and decision theory, which is something that you looked into. First off, for people who aren’t familiar, what is decision theory, in brief? If it’s possible to do this one in brief.
Christian Tarsney: Sure. So decision theory is the theory of how people either do or should make decisions. So descriptive decision theory studies how people do make decisions, normative decision theory studies how they should make decisions. There are a number of questions that decision theorists ask. So there’s no one question that centrally characterizes the discipline. One major question is how we respond to risk or uncertainty. So for instance, should we maximize expected value or expected utility, or are we allowed to be risk averse in ways that violate the axioms of expected utility theory? There’s also this famous debate between evidential decision theorists and causal decision theorists about how to act in cases where your choices give you some information about the pre-existing state of the world.
Robert Wiblin: Yeah. Is there a simple thought experiment that kind of elucidates the difference between evidential and causal decision theory?
Christian Tarsney: Yeah. So the classic case is called Newcomb's problem. The idea is that there is a predictor who's just very good at analyzing human motivations and predicting human choices. And the predictor presents you with the following choice: There are two boxes in front of you. One of them is transparent, and you can see it contains $1,000. The other box is opaque. And what the predictor tells you is that your options are either to take just the opaque box and get whatever's inside there, or to take the opaque box and the transparent box together. But if I predicted that you would take both boxes, then I left the opaque box empty. And if I predicted that you would take only the opaque box, I put $1 million inside. So evidential decision theorists say, well, if the predictor is really that great, either they're infallible at predicting my choices or they're just very, very good, then if I take the opaque box that tells me that the predictor certainly or almost certainly predicted that I would do that, and put $1 million inside. So I end up with $1 million. Whereas if I take both boxes, then I'll only end up with $1,000, because the predictor won't have put the $1 million inside.
Christian Tarsney: Whereas a causal decision theorist says, okay, but your choice makes no difference causally to whether there’s $1 million in the opaque box or not. There either is or there isn’t. And in either case, taking both boxes leaves you $1,000 richer than you would have been had you taken only the opaque box. So the rational thing to do is take both boxes.
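To make the two recommendations concrete, here's a minimal sketch of the arithmetic, assuming (purely for illustration; the 99% accuracy figure isn't from the conversation) an imperfect predictor:

```python
# Newcomb's problem with an imperfect predictor. The 99% accuracy figure is
# an illustrative assumption, not something stated in the conversation.
ACCURACY = 0.99
SMALL, BIG = 1_000, 1_000_000

# Evidential decision theory: treat your choice as evidence about what the
# predictor already did, and compare conditional expected payoffs.
edt_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0
edt_two_box = ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

# Causal decision theory: the box contents are already fixed; whatever your
# prior p that the opaque box is full, two-boxing pays exactly $1,000 more.
p = 0.5  # arbitrary prior; the comparison doesn't depend on it
cdt_one_box = p * BIG
cdt_two_box = p * BIG + SMALL

print(f"EDT: one-box ${edt_one_box:,.0f} vs two-box ${edt_two_box:,.0f}")
print(f"CDT: one-box ${cdt_one_box:,.0f} vs two-box ${cdt_two_box:,.0f}")
```

On these assumed numbers, conditioning on your choice makes one-boxing look far better, while holding the boxes' contents fixed makes two-boxing look better by exactly $1,000, which is the disagreement Christian describes.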
Robert Wiblin: Yeah. I think a thought experiment that feels more intuitive and a bit less sci-fi to me is the smoking lesion problem. Suppose we find out that a large part of the reason why smokers tend to die young isn't just that they're smoking; it's that there's some genetic correlation between people who are predisposed to enjoy smoking and have a compulsion to smoke, and people who happen to have a genetic predisposition for brain lesions that can kill them later in life. And so in that case, you've got this question: if you find that you enjoy smoking and want to smoke and decide to smoke, that gives you evidence that you're more likely to have this deadly brain lesion disease, for some genetic correlation reason.
Robert Wiblin: But then should you take that into account in your decision about whether to smoke? Choosing to smoke lowers your life expectancy, but not causally through the smoking itself; it just gives you evidence about something else about yourself. And it's a bit of a puzzle: deciding to smoke lowers your life expectancy by more than smoking's causal effect alone, so should you factor that in? And that one is more intuitive because it doesn't require anything that's really outside of what we're used to experiencing.
Christian Tarsney: Yeah. The smoking lesion case, that's the classic counterexample to evidential decision theory, because, well, I always find it hard to remember what the right intuition is supposed to be here, but most people intuit that the rational thing to do is to smoke, because it doesn't cause cancer. It just gives you information that you're more likely to have cancer. But there are some complications about the case that make it possible for evidential decision theorists to try to explain it away.
Robert Wiblin: I’ve got some links to decision theory stuff for people who are interested. What’s the interaction between future bias and decision theory that you’ve looked into?
Christian Tarsney: Well. So the particular connection that I've explored in this paper 'Thank goodness that's Newcomb' is that if you're an evidential decision theorist, then whether you do or don't care about the past can affect your choices in ways that don't require exotic backwards time travel or retro-causation or anything like that. So I imagined basically a variant of the Newcomb case where the predictor kidnaps you and subjects you to electric shocks for a period of time. And then they give you the option at the end to shock yourself one more time before you're released, but they made a prediction in advance about whether you would choose to give yourself that final shock. And if they predicted that you would, they shocked you fewer times over the last week that they were holding you and torturing you. And if they predicted that you wouldn't, they shocked you more times.
Christian Tarsney: And of course, if you’re an evidential decision theorist and you’re time-a neutral, you want to minimize the total number of shocks that you’ve ever experienced. And so you’ll choose to shock yourself now. But if you’re either a causal decision theorist or you’re biased towards the future, then you would not choose to shock yourself.
Robert Wiblin: Okay. What should we make of that?
Christian Tarsney: Well, one thing you could make of it is that this is one more case where evidential decision theory tells us something silly. And so we should be causal decision theorists. Of course, then you can rerun a similar case, which is basically what we did in this experimental paper, where you use backwards time travel rather than predictors to give people the option of affecting their own past experiences. And many of the same sort of philosophical issues come up.
Christian Tarsney: My take in the original paper was that our intuitions about the irrelevance or our indifference towards our past experiences don’t change very much when we’re considering these cases where we can ‘affect’ our past experiences, or our choices give us evidence about our past experiences. So my own take was that this undercuts the idea that the reason we don’t care about the past is because it’s practically irrelevant. But then this experimental paper that we did actually finds that people do change their intuitions or their judgements, at least on average in these cases. So my own philosophical take turned out to be undercut anyway, by our experimental results.
Robert Wiblin: Interesting. Yeah, I feel in that case, I have the intuition that you want to do the thing that reduces the total amount of electric shocks over all periods of time, which I guess is what you found other people felt at least to some degree. And I wonder whether there’s something that’s going on where it kind of depends on whether you’re thinking about it from the prudential selfish perspective of you at this instant in time, or whether you’re thinking about what would be a better world, all things considered. And it seems like what would be a better world all things considered is less torture in total. Maybe like what’s best for me right now is minimizing the amount of future torture that I’m going to experience. But then it seems like maybe we’re running up against a tension between our prudential perspective and then our ethical commitments. And this is creating a tension that we somehow have to resolve.
Christian Tarsney: Yeah, that could be. So if you think that we are generally time neutral when we’re thinking about other people, and then in this case, you can put on your impartial altruist hat when thinking about your past self and just treat them as another person that you’re concerned with, then maybe that is one reason why you would be more inclined to accept additional future pain to avoid a greater amount of past pain. But it is, as I mentioned earlier, non-obvious whether we’re generally time neutral when we’re thinking about other people. So one view you could take that’s not completely counterintuitive is well, the past as a whole is just dead and gone, not just my experiences, but other people’s experiences. And so what we should be thinking about as altruists is not making the world as a whole across all the time and space a better place, but making the future better, because that’s what’s still out there to be experienced.
Eternalism [00:32:32]
Robert Wiblin: Okay. I want to push on from this in just a second, but it sounded like earlier you were saying that the growing block theory and presentism and eternalism are kind of all still philosophically acceptable, and there are advocates for all of them in philosophy and physics. I kind of understood that there were some thought experiments that had made eternalism, the idea that the past, the present, and the future are all actual in some sense, the more dominant view, at least among physicists anyway. Have I misunderstood that?
Christian Tarsney: I think you’re probably right, that it’s more dominant among physicists, and probably even more dominant among philosophers, although all of these views still have active defenders. Maybe the most powerful argument that has convinced a lot of people is just that a naive picture of time where there’s an objective present moment in moving from the past towards the future requires that you be able to chop the universe into time slices in this objective way, where we can say all of these events, at all these different locations across the universe, those are the ones that are present right now.
Robert Wiblin: Which are simultaneous.
Christian Tarsney: Right. But special relativity teaches us that actually, whether two events at different locations are simultaneous with each other depends on basically how fast you’re moving. Right? So two people in motion relative to each other will disagree about which events are simultaneous. And so it looks — at least in relativistic physics — like there just couldn’t be a privileged plane of simultaneity, that all of those events are present and nothing else is.
Robert Wiblin: Yeah. I think that this shows up in ethics elsewhere when you're thinking about the ethics of the future. Because you can end up with these funny cases where someone cares less about the future: say you ask them how much they would pay to prevent something terrible happening in 1,000 years, and they say, well, not very much, because it's so far away in the future. Then you do something where you send them away at almost light speed and they can come back in what is to them only a few minutes or only a few hours, and arrive in effect 1,000 years in the future in this other location. And then the terrible thing happens. And you're like, what is the amount of time that's passed? Because this all depends on the speed they were going at and the path they traveled. So if this is only a few hours away from your perspective, does that mean that the 1,000-year thing doesn't matter?
Christian Tarsney: Yeah.
Robert Wiblin: It introduces this peculiar kind of inconsistency.
Christian Tarsney: I think that’s one very good argument against what’s called ‘pure time preference.’ Thinking that the mere passage of time or mere distance in time has ethical significance.
Robert Wiblin: Alright. We’ll put up a link to those papers and people can explore more if they would like, I’m sure there’s plenty more in there. Is there anything that people should take away in their practical life and their decision making altruistically from these past, present, future ethical comparison cases?
Christian Tarsney: I think there’s two things that are worth mentioning. One is altruistically significant, which is, if you think that one of the things we should care about as altruists is whether people’s desires or preferences are satisfied or whether people’s goals are realized, then one important question is, do we care about the realization of people’s past goals, including the goals of past people, people who are dead now? And if so, that might have various kinds of ethical significance. For instance, I think if I recall correctly, Toby Ord in The Precipice makes this point that well, past people are engaged in this great human project of trying to build and preserve human civilization. And if we allowed ourselves to go extinct, we would be letting them down or failing to carry on their project. And whether you think that that consideration has normative significance might depend on whether you think the past as a whole has normative significance.
Robert Wiblin: Yeah. That adds another wrinkle that I guess you could think that the past matters, but perhaps if you only cared about experiences, say, then obviously people in the past can’t have different experiences because of things in the future, at least we think not.
Christian Tarsney: Yeah.
Robert Wiblin: So you have to think that the kind of fixed preference states that they had in their minds in the past, it’s still good to actualize those preferences in the future, even though it can’t affect their mind in the past.
Christian Tarsney: Yeah, that’s right. So you could think that we should be future biased only with respect to experiences, and not with respect to preference satisfaction. But then that’s a little bit hard to square if you think that the justification for future bias is this deep metaphysical feature of time. If the past is dead and gone, well, why should that affect the importance of experiences but not preferences? Another reason why the bias towards the future might be practically interesting or significant to people less from an altruistic standpoint than from a personal or individual standpoint, is this connection with our attitudes towards death, which is maybe the original context in which philosophers thought about the bias towards the future. So there’s this famous argument that goes back to Epicurus and Lucretius that says, look, the natural reason that people give for fearing death is that death marks the end of your life, and after you’re dead, you don’t get to have any more experiences, and that’s bad.
Christian Tarsney: But you could say exactly the same thing about birth, right? So before you were born, you didn’t have any experiences. And well, on the one hand, if you know that you’re going to die in five years, you might be very upset about that, but if you’re five years old and you know that five years ago you didn’t exist, people don’t tend to be very upset about that. And if you think that the past and the future should be on a par, that there is no fundamental asymmetry between those two directions in time, one conclusion that people have argued for is maybe we should be sanguine about the future, including sanguine about our own mortality, in the same way that we’re sanguine about the past and sanguine about the fact that we haven’t existed forever. Which I’m not sure if I can get myself into the headspace of really internalizing that attitude. But I think it’s a reasonably compelling argument and something that maybe some people can do better than I can.
Robert Wiblin: I feel like that’s easy to resolve because I’m just like, yeah, it’s terrible that I didn’t used to exist. It’s terrible that I was born as late as I was. I should have been born 1 billion years earlier and lived through the entire length of it, but there’s not much I can do about that. I can go to the gym and try to live longer, but I can’t go to the gym and try to be born earlier. So it’s kind of water under the bridge, yeah?
Christian Tarsney: Yeah. Right. That could be the conclusion you reach too.
Robert Wiblin: Alright. We’ll stick up links to those papers and people can dig in if they’d like to learn more.
Fanaticism [00:38:33]
Robert Wiblin: Let’s move on and talk about a problem in moral philosophy known as fanaticism. Yeah, what is the problem of fanaticism for those who are not familiar?
Christian Tarsney: Roughly the problem is that if you are an expected value maximizer, which means that when you’re making choices you just evaluate an option by taking all the possible outcomes and you assign them numeric values, the quantity of value or goodness that would be realized in this outcome, and then you just take a probability-weighted sum, the probability times the value for each of the possible outcomes, and add those all up and that tells you how good the option is…
Christian Tarsney: Well, if you make decisions like that, then you can end up preferring options that give you only a very tiny chance of an astronomically good outcome over options that give you certainty of a very good outcome, or you can prefer certainty of a bad outcome over an option that gives you near certainty of a very good outcome, but just a tiny, tiny, tiny probability of an astronomically bad outcome. And a lot of people find this counterintuitive.
Robert Wiblin: So the basic thing is that very unlikely outcomes that are massive in magnitude, and that would in some sense be much more important than the other outcomes, end up dominating the entire expected value calculation and dominating your decision, even though they're incredibly improbable. And that just feels intuitively wrong and unappealing.
Christian Tarsney: Well, here’s an example that I find drives home the intuition. So suppose that you have the opportunity to really control the fate of the universe. You have two options, you have a safe option that will ensure that the universe contains, over its whole history, 1 trillion happy people with very good lives, or you have the option to take a gamble. And the way the gamble works is almost certainly the outcome will be very bad. So there’ll be 1 trillion unhappy people, or 1 trillion people with say hellish suffering, but there’s some teeny, teeny, tiny probability, say one in a googol, 10 to the 100, that you get a blank check where you can just produce any finite number of happy people you want. Just fill in a number.
Christian Tarsney: And if you’re trying to maximize the expected quantity of happiness or the expected number of happy people in the world, of course you want to do that second thing. But in addition to just the counterintuitiveness of it, there’s a thought like, well, what we care about is the actual outcome of our choices, not the expectation. And if you take the risky option and the thing that’s almost certainly going to happen happens, which is you get a very terrible outcome, the fact that it was good in expectation doesn’t give you any consolation, or doesn’t seem to retrospectively justify your choice at all.
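A back-of-the-envelope version of that calculation, using the numbers from the example plus an assumed value of minus 1 trillion for the near-certain terrible outcome (that valuation is my own crude stand-in, not something from the conversation):

```python
# The gamble from the example: a one-in-a-googol chance of a "blank check",
# versus 1 trillion happy lives for sure.
SAFE  = 1e12     # value of the sure thing: 1 trillion happy lives
P_WIN = 1e-100   # one in a googol
BAD   = -1e12    # assumed value of 1 trillion lives of hellish suffering

def gamble_expected_value(blank_check_number):
    return P_WIN * blank_check_number + (1 - P_WIN) * BAD

# Fill in a big enough number and the gamble wins on expected value,
# even though it almost certainly ends in the terrible outcome.
print(gamble_expected_value(1e120) > SAFE)  # True
print(gamble_expected_value(1e50)  > SAFE)  # False: not yet big enough
```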
Robert Wiblin: Yeah. I think this can show up in other ways as well. One that jumps to mind is the dominant view among people who study this kind of thing is that insects probably aren’t conscious, and if they are conscious, they’re probably not very conscious. But we’re not super sure about that, so maybe there’s a 1 in 1,000 chance that insects are conscious to a significant degree. And there’s so many insects, it’s just phenomenal how many insects there are relative to how many humans, it’s a very, very large multiple. A fanatical position might be someone who says, well, I’m just going to maximize expected value, and I think there’s a 1 in 1,000 chance that insects are conscious to an important degree, and so I’m going to focus all my attention on trying to improve the wellbeing of insects. So this is one that doesn’t involve time as much, but involves a change of focus based on a longshot possibility that something really matters even though it probably doesn’t.
Christian Tarsney: I think in that case too it seems counterintuitive to throw away, for instance, the opportunity for a very good outcome for this very tiny probability of a much better outcome. But then I think the other important thing — and maybe something that people underappreciate — is just that there isn’t any great, at least any widely accepted, positive argument for the kind of risk-neutral expected value maximization that leads you to fanaticism. And in fact, the standard expectational theory of decision making under risk doesn’t force you to be fanatical in that way.
Robert Wiblin: Okay, interesting. Maybe let’s first lay out what is the case in favor of having a fanatical style of decision making where you’re just going to let that tail wag the dog?
Christian Tarsney: There’s a few arguments you could make. One route is just to defend risk-neutral expected value maximization. What that means is you have some way of measuring value that’s independent of your preferences towards risk. So for instance, just to simplify, I care about the number of happy people and the number of unhappy people that ever exist, and so the value of an outcome is just, say, the number of happy people minus the number of unhappy people. And you might just think, well, the intuitive response to risk is to value outcomes in proportion to how good they quantitatively are, multiply that by probability, and risk-neutral expected value maximization just feels right.
Christian Tarsney: There’s also more theoretical arguments you can give. So for instance, Harsanyi’s aggregation theorem gets you something like at least risk neutrality in the number of people who you can benefit to a given degree. But it requires you to accept some controversial premises like the ex-ante Pareto principle. That is, you assume that each individual is an expected utility maximizer, and you say that if some option gives greater expected utility for each individual, then we should prefer it. There are various reasons why you might reject that.
Robert Wiblin: The underlying principle there is that if someone’s better off and no one else is worse off, then it’s going to be better. And I guess Harsanyi tried to do a bit of mathematical alchemy to convert that into a view that you should maximize expected value, which is to say multiply the probability of each outcome by the value of that outcome, add all those up, and then maximize the total.
Christian Tarsney: So it’s a little complicated, for instance because the Harsanyi theorem allows individuals to be very risk averse, for instance, with respect to years of happy experience or whatever. But what it does say, roughly, is well, if I can say benefit N individuals to a certain degree with probability P or I can benefit M individuals to that same degree with probability Q then which thing I should do is just determined by multiplying the number of people times the probability.
Christian Tarsney: There’s another way of justifying fanaticism that doesn’t depend on a commitment to risk-neutral expected value maximization. And this is something that Nick Beckstead and Teruji Thomas have explored in a GPI working paper that’s based on a part of Nick Beckstead’s dissertation. Roughly the argument is, well, look, suppose that I can have some good outcome with probability P, or I can have a much better outcome with a slightly smaller probability: let’s say we multiply P by some factor, like 0.99 or something, so I reduce the probability of the good outcome by 1%, but I can increase how good the outcome is by an arbitrarily large amount.
Christian Tarsney: There must be some amount by which you could increase the value of the outcome such that you’d be willing to accept a 1% decrease in its probability. And if you think that for any probability and any magnitude of goodness or value, you’re willing to accept that 1% reduction in probability for a sufficiently large increase in the magnitude of the payoff, then you just iterate that enough times and ultimately you’re preferring a tiny probability of a ridiculously good payoff to certainty of even potentially a very good payoff. So that allows you to be, for instance, risk averse with respect to value, but nevertheless you end up being, at least in principle, vulnerable to fanaticism.
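Here's a toy version of that iteration, where each step trades a 1% cut in probability for a payoff 100 times larger; the 100x multiplier per step is an illustrative assumption, not something from the Beckstead–Thomas paper:

```python
# Iterating the trade "give up 1% of the probability for a much bigger payoff".
probability = 1.0
payoff_exponent = 0   # track the payoff as 10**payoff_exponent to avoid overflow

for _ in range(500):
    probability *= 0.99     # each step: accept a 1% cut in probability...
    payoff_exponent += 2    # ...for a payoff 100 times larger (assumed multiplier)

print(f"probability ~{probability:.3g}, payoff ~1e{payoff_exponent}")
# Each step looked acceptable on its own, but after 500 of them you've traded
# a sure thing for roughly a 0.7% chance of an absurdly large payoff.
```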
Robert Wiblin: Let’s take the other side now. How big are the worries about fanaticism? What are the downsides, and what ways might we work around it?
Christian Tarsney: Well, to me the biggest reason to not be blithely fanatical or blithely maximize expected value is just that the arguments for it are only moderately compelling. So a thing that many philosophers, and probably many people in the EA community, misunderstand about standard decision theory is that the standard widely accepted theory of decision making under risk, expected utility theory, what it tells you roughly is that you should make choices in a way that can be represented as assigning numerical values to outcomes, and then multiplying those values by probability and maximizing the expectation.
Christian Tarsney: But it doesn’t tell you anything about how you should go about assigning those numerical values to outcomes. Suppose, for instance, that you have an independently given ethical scale, like I care about the number of happy lives, and you assume that you should rank outcomes according to how good they are — so more happy people is always better than fewer happy people. If you combine that with, say, the von Neumann–Morgenstern axioms, which are one of the standard formulations of expected utility theory, the conclusion you get is just that you should maximize the expectation of some increasing function of the number of happy lives.
Christian Tarsney: But that increasing function could be, for instance, bounded above. So that the more happy people already exist, the less you care at the margin about an additional happy person. So that’s to say standard orthodox decision theory doesn’t force you to be fanatical, and the arguments that do force you to be fanatical, there are various ways that you can get off the bus.
Robert Wiblin: Okay. So the idea here is that the basic principles in decision theory and expected value theory that we usually think we're going to have to work with, they say that if more happy people is good, then twice as many happy people is going to be better than 1x as many happy people. However, it doesn't show that it has to be twice as good. And that means, because you can get declining returns on these larger and larger benefits, that you're potentially going to be less vulnerable to overweighting the largest outcomes in scale, because you can tamp them down by saying, well, maybe twice as many happy people is only 1% better than 1x as many happy people.
Christian Tarsney: Yeah, exactly. You could, for instance, maximize the expectation of the logarithm of the number of happy people if you wanted to, but that would still be vulnerable to fanaticism because the log is unbounded. But you can have functions that, like the log, are concave, but that, unlike the log, have a horizontal asymptote.
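A small sketch of the contrast, using an arbitrary 'satiation scale' for the bounded function and the one-in-a-googol gamble from earlier (both the functional form and the scale are illustrative choices, not anything from the conversation):

```python
# Two increasing utility functions over the number of happy lives.
K = 1e12   # arbitrary "satiation scale" for the bounded utility (an assumption)

def u_bounded(n):   # increasing in n, but bounded above by 1
    return n / (n + K)

def u_linear(n):    # risk-neutral: utility is just the number of happy lives
    return n

P_WIN, SAFE = 1e-100, 1e12   # the one-in-a-googol gamble vs the sure thing

def prefers_gamble(u, payoff):
    return P_WIN * u(payoff) > u(SAFE)

print(prefers_gamble(u_linear, 1e120))   # True: linear utility is fanatical
print(prefers_gamble(u_bounded, 1e300))  # False: the ceiling of 1 means no
                                         # payoff can outweigh the sure thing
```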
Robert Wiblin: To me it seems really intuitive, the idea that if something is good then twice as much of it is twice as good. It's a little bit surprising to find out that that wouldn't be a reasonably fundamental principle of rational decision making. Is there a way of making it intuitive why that doesn't spill out of these kinds of axioms?
Christian Tarsney: Well, there’s maybe two things to say. One is in general we don’t think that twice as much of a good thing is twice as good. So money is the obvious example here. Of course if you get $1 billion tomorrow that would be life changing. If you get an additional $1 billion the day after that, that would be nice but it wouldn’t double the impact of that first $1 billion. So then you need some separate argument for why, say, happy lives behave differently than money. And maybe it seems intuitive to you, maybe the point is that we value happy lives intrinsically while we only value money instrumentally, or something like that. But at least it’s not automatic or axiomatic that, for anything that matters and that we have some way of measuring, the value of it has to scale linearly with the amount of it.
Robert Wiblin: The normal story there is that money is instrumentally valuable so it’s just useful as a means to an end. And assuming that my end was happiness, then I can’t buy twice as much happiness with $2 billion as I could with $1 billion. Maybe I could barely make myself any happier whatsoever, and so of course the second $1 billion isn’t equally as good as the first. But then with the thing that you terminally value, the thing that is valuable in itself, like happiness, if I could get twice as much happiness, that feels more intuitive that that is twice as good.
Christian Tarsney: Yeah. One way that you could respond is to say, well, maybe to some extent we value, say, individual happiness, or the existence of happy lives, not just intrinsically but also instrumentally. Or because it’s constitutive of some greater good, like we want there to be a flourishing human civilization or something like that. Or we want the universe to contain life and sentience and happiness. And once there’s enough life and happiness and sentience to satiate that need for the universe to contain happiness, then we care about additional increases in individual happiness or the number of happy people less, or something like that.
Christian Tarsney: But then the other thing to say is, grant your argument, for instance, that twice as many happy people is twice as good. Then there’s a further question. If I can have one outcome for sure or another outcome that’s twice as good with say 51% probability, should I prefer the twice as good outcome with 51% probability? Even conceding that it’s twice as good, it doesn’t automatically follow that I should just multiply that by the probability.
Robert Wiblin: So you’re saying you don’t necessarily have to do linear expected value maximization to be rational on this view?
Christian Tarsney: Yeah. Well, the thing that the standard axioms of expected utility theory tell you is suppose that you… Well, this isn’t part of expected utility theory, but suppose ethics gives us a ranking of outcomes, so more happy people or more happiness is better, and we stick that in exogenously. And then we also say you have to satisfy these axioms like independence and continuity and transitivity and so forth.
Christian Tarsney: Then the conclusion that spits out is that you need some utility function that’s increasing in the total amount of value or number of happy lives or whatever such that more happy people has greater utility, but that doesn’t mean that… That function doesn’t have to be linear, just nothing in the axioms forces that to be linear. So at least just an appeal to those axioms or appeal to the normative authority of expected utility theory doesn’t get you that jump from twice as good to we should weight it twice as much when we’re multiplying by probabilities.
Stochastic dominance [00:52:11]
Robert Wiblin: To what degree does this solve this problem of fanaticism? Should people think like, “Oh, well this has dealt with this issue to a pretty large extent”?
Christian Tarsney: Well, I definitely don’t think that the problem is resolved. So my own take on fanaticism and on decision making under risk, for whatever it’s worth, is fairly permissive. A weird and crazy view that I’m attracted to is that we’re only required to avoid choosing options that are what’s called first-order stochastically dominated, which means that you have two options, let’s call them option one and option two. And then there’s various possible outcomes that could result from either of those options. And for each of those outcomes, we ask what’s the probability if you choose option one or if you choose option two that you get not that outcome specifically, but an outcome that’s at least that good?
Christian Tarsney: If option one, for any possible outcome, gives you a greater overall probability of an outcome at least that desirable, then that seems a pretty compelling reason to choose option one. Maybe a simple example would be helpful. Suppose that I'm going to flip a fair coin, and I offer you a choice between two tickets. One ticket will pay $1 if the coin lands heads and nothing if it lands tails; the other ticket will pay $2 if the coin lands tails, but nothing if it lands heads. So you don't have what's called state-wise dominance here, because if the coin lands heads then the first ticket gives you a better outcome, $1 rather than $0. But you do have stochastic dominance, because both tickets give you the same chance of at least $0, namely certainty, both tickets give you a 50% chance of at least $1, but the second ticket uniquely gives you a 50% chance of at least $2, and that seems a compelling argument for choosing it.
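The dominance check in that coin-flip example is simple enough to write out directly; here's a minimal sketch (the helper functions are just for illustration):

```python
# First-order stochastic dominance check for the two coin-flip tickets.
def prob_at_least(lottery, threshold):
    """P(payoff >= threshold) for a lottery given as [(probability, payoff), ...]."""
    return sum(p for p, x in lottery if x >= threshold)

def stochastically_dominates(a, b):
    """a dominates b if a is at least as likely to reach every payoff level,
    and strictly more likely to reach at least one."""
    levels = {x for _, x in a} | {x for _, x in b}
    weakly   = all(prob_at_least(a, t) >= prob_at_least(b, t) for t in levels)
    strictly = any(prob_at_least(a, t) > prob_at_least(b, t) for t in levels)
    return weakly and strictly

ticket_1 = [(0.5, 1), (0.5, 0)]   # $1 on heads, nothing on tails
ticket_2 = [(0.5, 0), (0.5, 2)]   # nothing on heads, $2 on tails

print(stochastically_dominates(ticket_2, ticket_1))  # True
print(stochastically_dominates(ticket_1, ticket_2))  # False
```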
Robert Wiblin: I see. I guess, in a continuous case rather than a binary one, you would have to say, well, the worst case is better in, say, scenario two than in scenario one. And the first percentile case is better, and the second percentile case, and the median is better or at least as good, and the best case scenario is also as good or better. And so across the whole distribution of outcomes from worst to best, adding up the probabilities as percentiles, the second scenario is always equal or better. And so it would seem crazy to choose the option that is always equally good or worse, no matter how lucky you get.
Christian Tarsney: Right. Even though there are states of the world where the stochastically dominant option will turn out worse, nevertheless the distribution of possible outcomes is better.
Robert Wiblin: Okay. So you’re saying if you compare the scenario where you get unlucky in scenario two versus lucky in scenario one, scenario one could end up better. But ex-ante, before you know whether you got lucky with the outcome or not, it was worse at every point.
Christian Tarsney: Yeah, exactly.
Robert Wiblin: Okay. And so your view is a fairly narrow one, that we only need to take options that are stochastically dominant.
Christian Tarsney: Or that are not stochastically dominated.
Robert Wiblin: How is that different?
Christian Tarsney: Suppose I have three options: one, two, and three. It could be that, for instance, one stochastically dominates two, but three neither dominates nor is dominated by anything. And then, yeah, three doesn't stochastically dominate anything else, but the important thing is that it's not stochastically dominated. So there's no other option where you can say, clearly this is better than three. And that means three is permissible.
Robert Wiblin: Seems like in practice, and the world being so messy, there’s so many different potential outcomes with different rankings from 0% to 100% of luckiness that it’s going to be rare to find options that are stochastically dominated, or at least that there’ll be a wide range of options that aren’t stochastically dominated and so this could in the real world end up being a very permissive theory of what it is to make a rational decision.
Christian Tarsney: Yeah, that’s absolutely right. When you asked earlier, well, should we just think that this point about what expected utility theory tells us settles the problem of fanaticism? One reason not to think that is that, in effect, what standard expected utility theory tells you is just this stochastic dominance thing. It constrains your choices under risk up to stochastic dominance, but no further. And as you say, that’s just very, very, very permissive. For instance, if I can save one life for sure or 100 lives with probability 0.99, both the ‘it’s just stochastic dominance’ view and the ‘it’s just the axioms of expected utility theory’ view say you’re permitted to do either thing, but intuitively saving 100 lives with probability 0.99 looks like the better option.
Background uncertainty [00:56:27]
Robert Wiblin: Are there any arguments that we could make for fanaticism, or for the more linear maximize expected value view, that might get us further than just saying you shouldn't choose something that's stochastically dominated? And that might be a bit closer to common sense in this kind of thing, where you're saying, well, a 99% chance of saving 100 lives has got to be better than a certainty of saving one life, because it's 99 people in expectation?
Christian Tarsney: Yeah, so the argument that I’ve been exploring in my work recently, and in particular in this working paper ‘Exceeding expectations,’ looks at what happens to the stochastic dominance criterion when you add in what you might call background uncertainty. Suppose that you’re a classical utilitarian, just for example. So you measure the value of an outcome by the total amount of, say, happiness minus suffering in the resulting world. And when you make a choice, you’re unsure about two things that we can separate out if we want to.
Christian Tarsney: One is the outcome of your choice. You can think of that as what happens in your future light cone, in the part of the universe that you can affect. But you’re also uncertain about how much value there is in the universe to begin with, so in the past or in faraway galaxies or whatever. And it turns out that if you’re sufficiently uncertain about the amount of value that’s in the universe to begin with, then an option whose local outcome — the thing that happens inside your future light cone — has greater expected value, but isn’t stochastically dominant in a vacuum, becomes stochastically dominant once you add in that background uncertainty.
Christian Tarsney: If you try to model this numerically in a way that at least seems plausible to me, you get the conclusion that actually this very minimal stochastic dominance criterion, once you account for our background uncertainty about the amount of value in the universe, recovers most of risk-neutral expected value maximization. And for instance it can tell us you should save the 100 lives with probability 0.99 rather than one life for sure, while still giving us an out in these extreme fanatical cases.
Robert Wiblin: Okay, interesting. Is there any way of giving an intuitive verbal explanation of why that is, that all of that background uncertainty ends up recovering something closer to just the normal maximize expected value?
Christian Tarsney: Yeah, I can try. For example, well, take that one happy person for sure versus 100 happy people with probability 0.99. If you’re just thinking about that choice in a vacuum and you imagine that there’s nothing else in the universe that you’re uncertain about, you can say, “Well, if I take the sure thing then I’m absolutely guaranteed that the total amount of value in the universe will be at least 1.” If the units are happy people in existence or something. Versus, “If I take the second option, I’m not sure that the universe will be at least that good.”
Christian Tarsney: But when you add in substantial background uncertainty, then you can no longer say that. Because even if you take the apparent sure-thing option, you’re no longer certain that the universe as a whole will have a value of at least 1. Because it could be that the rest of the universe, the part that you can’t affect, is already really bad. And then if you want to think about, okay, so there’s this threshold of 1, say I’m really interested in the universe having a value of at least 1, well, one way in which my choice could bring that about is that the amount of value in the universe to begin with is somewhere between 0 and 1, and then I add this extra unit that puts us over the threshold.
Christian Tarsney: But another way it could happen is I choose the riskier option, and it pays off, which happens with probability 0.99, and the amount of value in the universe to begin with was somewhere between −99 and 1. And so that extra 100 units of value now puts us over the threshold to have more than 1 total value in the universe.
Robert Wiblin: And it puts you over the threshold in far more scenarios, because there’s a wider range there.
Christian Tarsney: Exactly, right. So it is much more likely that the total amount of pre-existing value in the universe will be between −99 and 1 than that it’ll be between 0 and 1.
Robert Wiblin: And there you were using a strict cutoff with the boundary being 1, where it's no good below and it's good above, but we can extrapolate the same underlying idea to a wider range of possible outcomes, where value keeps increasing but with diminishing returns.
Christian Tarsney: Yeah. And what stochastic dominance means is basically you could pick any number you want, 1 or −10, or 1,000, and exactly the same argument will work, that choosing the expectationally superior option will increase your overall probability of ending up with a universe that’s at least that good.
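Here's a rough Monte Carlo illustration of that point, using an assumed normal distribution for the background value; both the distribution and its scale are arbitrary choices for illustration, not the paper's actual model:

```python
import random
random.seed(0)

# Assumed background uncertainty: normally distributed with a standard
# deviation of 1,000 "lives' worth" of value (an illustrative choice).
N, BACKGROUND_SD = 200_000, 1_000

def sample_total(local_payoff):
    background = random.gauss(0, BACKGROUND_SD)   # value you can't affect
    return background + local_payoff()

sure_thing = lambda: 1                                     # one life for sure
risky      = lambda: 100 if random.random() < 0.99 else 0  # 100 lives w.p. 0.99

sure_totals  = [sample_total(sure_thing) for _ in range(N)]
risky_totals = [sample_total(risky) for _ in range(N)]

# Compare P(total value >= t) at several thresholds: with this much background
# uncertainty, the expectationally better option does at least as well (up to
# sampling noise) at every threshold, which is the stochastic dominance claim.
for t in (-2_000, -100, 0, 1, 100, 2_000):
    p_sure  = sum(x >= t for x in sure_totals) / N
    p_risky = sum(x >= t for x in risky_totals) / N
    print(t, round(p_sure, 4), round(p_risky, 4))
```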
Robert Wiblin: Okay. Now I get it. Now I guess my objection is the other way round. This seems like such a strong argument that it might just bring back fanaticism, because you've gotten too close again to just the full maximize expected value view.
Christian Tarsney: Yeah, two things to say about that. One is I think if it does, then we’ve got actually quite a powerful argument for fanaticism, because the argument that you shouldn’t choose a stochastically dominated option just seems extremely compelling.
Robert Wiblin: It’s so powerful.
Christian Tarsney: Yeah. There are axiomatic arguments you can give for it, including the standard axioms of expected utility theory, that people find quite compelling. I think if it just turns out that the fanatical option in a lot of these real-world cases is stochastically dominant, then that's a better argument than we had before for embracing fanaticism. One of the major motivations for this project is that this phenomenon of background uncertainty inducing stochastic dominance happens really easily when you're thinking about moderate probabilities of medium-sized payoffs. And then when you hold fixed the expected value of an option but you get that expected value from smaller and smaller probabilities of larger and larger payoffs, as you do that Pascalian transformation on an option it takes more and more and more background uncertainty to make it stochastically dominant.
Christian Tarsney: And so you get this nice phenomenon where if you know what your background uncertainty is, your probability distribution about the amount of value in the universe to begin with, then you can actually set a threshold where you can say, “Well, if I have the option to produce one unit of value with certainty versus a tiny probability of an astronomically good outcome, no matter how astronomically good that outcome is, the probability has to be at least X for me to be compelled, rationally required to do it.” So 10 to the −10 or something, and below that I’m still permitted to do the fanatical thing but I’m also permitted to take the sure thing.
Robert Wiblin: I see. So it will get you fanaticism up until the point where the goodness of the outcome that’s necessary to try to prompt you to be fanatical gets large relative to the background uncertainty about all of the different scenarios of how well the entire future could go. And so once you start getting to universe-scale good outcomes, that’s no longer big relative to the underlying uncertainty that you had, regardless of your actions, about how well things could go. Because it’s now spanning the full range from the best to worst outcome.
Christian Tarsney: Yeah.
Robert Wiblin: And so the stochastic dominance argument no longer applies, and you have a sphere of permissibility.
Christian Tarsney: Yeah, yeah. Very roughly, you're forced to maximize expected value in most cases where the outcomes that you're looking at are smaller than, for instance, the interquartile range of your background uncertainty, so the difference between the 25th and 75th percentile of the amount of value there could be in the universe to begin with. That's one simple way of measuring how uncertain you are. If the local outcomes that you're considering are much smaller than that, then you're typically required, under some other conditions (in particular you need sufficiently heavy tails and things like that), to maximize expected value in almost all cases. But then when you're dealing with outcomes that are potentially much larger than that interquartile range, or than the scale of your background uncertainty more generally, then the stochastic dominance requirement becomes a lot more permissive.
Robert Wiblin: Okay. This seems a very neat potential middle ground where it gets you quite a lot of fanaticism, but not so much that it seems to really go off the deep end. But for an individual, it seems it might prompt you to be very fanatical, because we're all tiny ants just adding little bits of sand to the hill. And so perhaps the effect any one of us can ever hope to have on the total goodness of the universe, relative to the background uncertainty, is minuscule, and so in practice maybe this is going to spit out a fanatical answer most of the time, except in very wacky cases.
Christian Tarsney: Well, unless you think that… Unless what you’re concerned with as an individual is making some tiny difference to the probability that humanity does or doesn’t cross some threshold that makes a difference. For instance existential risks. If you think that if I devote my career to trying to reduce biological risks, say I can individually reduce the probability of premature human extinction by 1 in 1 billion or 1 in 1 trillion or something like that. Then in some sense, you’re still an individual just making a small marginal difference, but that difference takes the form of changing the probability of an astronomically good or astronomically bad outcome. So in that case the stochastic dominance view combined with what I take to be reasonable assumptions about our background uncertainty might say that you’re actually permitted to go either way and opt for the sure thing, work on global poverty or something.
Robert Wiblin: I see. That’s because the probability is sufficiently low that… Or indeed the lower the probability the wider the range of permissibility, because the probability difference is so small relative to the background uncertainty?
Christian Tarsney: Yeah. I mean basically the way to think about it is, you’re getting lots of expected value from, say, reducing extinction risks, because the potential payoff if humanity becomes a grand interstellar civilization or something is so astronomical, say it’s 10 to the 52 happy lives or something. But what the stochastic dominance requirement under background uncertainty forces you to do is treat those increases in the total amount of value in the universe as linear up to roughly the scale of your background uncertainty. But if the scale of your background uncertainty is say 10 to the 15 or 10 to the 20 human lives, then you’re forced to regard ensuring the future existence of humanity as at least good to the degree 10 to the 20. But that’s a lot less than 10 to the 52.
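As a very loose back-of-the-envelope gloss on that comparison, not the paper's actual formalism, with an assumed 1-in-1-billion individual effect on extinction probability:

```python
# Illustrative figures from the conversation plus one assumed parameter.
payoff_if_realized = 1e52   # happy lives in a grand interstellar future
background_scale   = 1e20   # rough scale of background uncertainty about value
delta_p            = 1e-9   # assumed individual change in extinction probability

print(delta_p * payoff_if_realized)  # 1e43: the naive risk-neutral credit
print(delta_p * background_scale)    # 1e11: roughly what stochastic dominance
                                     # under background uncertainty still forces
                                     # you to count, on this loose gloss
```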
Robert Wiblin: Now, maybe I’m going to seem nuts here. But it seems like one aspect of the background uncertainty is, say, whether there’ll be aliens that will arise at some point in the future and colonize some significant fraction of the universe in our stead, even if we go extinct. Or maybe there are aliens in the past, or aliens outside of the accessible universe somewhere else? Or maybe there’s a lot of uncertainty just about what is of moral value, and how much moral value can exist? Because we don’t know how valuable good experiences are, or how valuable justice is. And so in fact, the amount of uncertainty about the goodness of the future of all of the universe is larger even than what we can directly affect by guiding Earth-originating life.
Christian Tarsney: Yeah, I think that’s totally plausible. So if you think that the universe is really enormous and there are probably other civilizations out there, and however good our civilization might be, there’s at least a substantial probability that there are many, many civilizations already in the universe or far away in distant galaxies that are achieving that same level of value, and we don’t know exactly how much value that is, or how many of those civilizations there are, and so forth… Then, yeah, I think it’s totally plausible that the scale of our background uncertainty about the value in the universe could be many orders of magnitude greater than the potential value of human civilization. But this is the sort of thing where of course what practical conclusions you reach depends sensitively on what numbers you plug in, and this is all pretty subjective. So it’s hard to really pound your fist on the table and say our background uncertainty should have this scale rather than that scale.
Robert Wiblin: Okay, makes sense. Maybe to wrap up this section, what should listeners take away from this if they’re someone who has been trying to themselves grapple with this question of how fanatical to be in their expected value maximization choices, in like, do I work on existential risk or do I work on something with a high probability of benefiting people in the immediate term?
Christian Tarsney: Yeah. Unfortunately my own take, or the take that’s given by my view, is something like, well, it depends delicately on the numbers, and if you think that you can make, say, a 10 to the −10 difference in the probability of extinction, well, then it just depends on exactly how uncertain you are about the value of the universe, and so forth. But a little bit more philosophically, or maybe a little bit more helpfully, I guess, I would say number one it’s worth bearing in mind that it’s not just automatic and axiomatic and beyond dispute that you have to be a kind of naive expected value maximizer. There are good reasons to be skeptical of that. And at least I don’t think it’s unreasonable for somebody to, in sufficiently extreme cases, opt for the sure thing rather than just being led anywhere by any tiny probability of 10 to the 52 future lives or something like that.
Christian Tarsney: But then on the other hand, insofar as this argument about stochastic dominance under background risk is compelling, it means that at least in a lot of ordinary cases where we're not considering really extreme probabilities, actually the fanatical thing shouldn't seem so counterintuitive or—
Robert Wiblin: Outlandish.
Christian Tarsney: Right, yeah. Because actually what you’re doing is in some sense quite safe. Whatever target you’re interested in, you’re increasing the probability that the universe as a whole reaches or exceeds that target.
Robert Wiblin: Okay, nice. What’s been the reception to this idea among philosophers? Has it been warmly received?
Christian Tarsney: I would say people have very different responses. I think people generally find the argument and the results interesting. Some people find the crazy view at least worth taking seriously, and other people don’t. I think a common objection that I’ve encountered and that I think is totally reasonable is, well, we have these intuitions — for instance that you should maximize expected value — in ordinary cases, but you’re not required to in these kinds of extreme fanatical cases. And maybe this kind of the stochastic dominance rule combined with our actual empirical background uncertainty is a decent extensional match for our intuitions, but it doesn’t seem super plausible that it really gets at the explanation for our intuition, right?
Robert Wiblin: Yeah.
Christian Tarsney: Because we’re not walking around thinking about our uncertainty about the amount of value in distant galaxies or something like that. So is it really a point in favor of this theory that it captures our intuitions if it’s not capturing them for the right reasons?
Robert Wiblin: Yeah, I guess it captures the conclusion, but for a reason that is not plausibly related to why we actually believe the thing that we believe.
Christian Tarsney: Yeah. Now, I mean, I want to argue, and this is something I don’t do in the existing working paper, and something I still need to work out in more detail, but it does seem to me that there’s actually more of a connection than you might initially think. For instance, if you’re making, say, just self-interested prudential decisions about your money, one good reason to maximize expected value when you’re making small bets is that you have lots of other uncertainty about the rest of your financial future. You face this long run of other financial choices, and that gets you probably not all the way to stochastic dominance — because for instance your uncertainty about your future income is probably not unbounded — but it means that a very wide range of risk attitudes will agree that you ought to do the expectation-maximizing thing.
Christian Tarsney: And I think people are plausibly sensitive to the fact that you face this long run of future choices, that you face other uncertainty, that adopting a policy of maximizing expected value is extremely likely to pay off more in the long run. I do think there’s some connection here, but I don’t have it fully worked out in my head yet.
Robert Wiblin: Whereas in one-off bets you can’t make that same argument about in the long run over many choices it’s necessarily going to pay off, or very likely to pay off.
Christian Tarsney: Yeah, exactly.
Robert Wiblin: Okay. I don’t want to introduce Pascal’s mugging here because that would require us to lay out Pascal’s mugging for those who don’t know it. But I guess, yeah, a savvy listener can think about how this might interact with the Pascal’s mugging case, and we’ll stick up a link to that thought experiment.
Epistemic worries about longtermism [01:12:44]
Robert Wiblin: Let’s talk now about challenges to longtermism that stem from us not being able to foresee, properly, the effects of our actions on the very long term. For those who are fresh to this topic, what’s the basic trouble here?
Christian Tarsney: Well, so longtermism is, very roughly, the view that what we ought to do in most choice situations — or most of the most important choice situations that we face — is mainly determined by the effects of our actions on the very far future. And the kind of simple, intuitive argument for longtermism is that the far future is just potentially vast. Its scale is much greater than the scale of the near future, if you think human-originating civilization could exist for billions of years, or something like that. But there’s this countervailing effect that’s harder to quantify, which is, as we look further and further and further into the future, it gets harder and harder to predict not just what the future will look like, but what the effects of our present actions or interventions will be. And it’s not at all obvious, when you quantify this, whether the first factor, the scale of the far future, is larger or more powerful than the second factor, the declining predictability of the future and the difficulty of predictably influencing it.
Robert Wiblin: So, yeah, why think that we will be able to predict the consequences of our actions, or really anything that we care about, 100 or 1,000 years in the future? It’s hard enough to predict what effect the things I do are going to have in one month or one year, let alone that far out.
Christian Tarsney: Yeah. So, I think my own response to this at least, is that our ability to predict and predictably influence the future is a matter of degree. And one simple reason to not just throw our hands up and say, “Well, we can’t possibly predict the future more than 100 years in advance,” is, well, we can predict the future, and we can predict the effects of our actions, one year in advance. And well now think about two years, three years, and so forth, presumably it gets harder to predict the future, but it would be weird if there was some point where it just discontinuously went to zero. Where our ability to predictably influence the future went from non-zero and not that great, but we can have some predictable effect, to all of a sudden you can have no predictable effect at all.
Christian Tarsney: So I think the right way to think about this and model this is that our ability to predictably influence the far future decreases. And we want to understand exactly what that means and the rate at which it decreases. The other answer from a different angle is that we can plausibly have predictable effects on the very long-run future if we can have effects on the nearer future that are persistent. The most obvious example is if we can make the difference between humanity surviving or not surviving the next century: plausibly, if we survive the next century or the next millennium, our civilization has at least a non-trivial chance of persisting for many thousands of years, maybe millions of years, maybe even billions of years.
Christian Tarsney: And on the other hand, if we don’t survive the next century, it’s very plausible that no civilization is going to exist on Earth, maybe for the rest of time, certainly for millions and millions of years. And so all you need to be able to do to have a predictable effect on the very long-run future, or at least to have some non-trivial chance of an effect on the very long-run future, is to affect the medium-term future in ways that are persistent.
Robert Wiblin: Yeah. As you were talking about how the uncertainty gets bigger and bigger year after year… I wonder whether the rate at which the uncertainty increases kind of declines over time, because you think, imagine if I did something and it successfully had a positive impact one year from now, and two years from now, and three years from now, it’s possible that it will flip in the fourth year and will go to zero, or become negative. But it seems like the rate at which that kind of flipping thing happens should decrease the longer the effect has been positive. To some extent, the uncertainty about whether it was a good or bad thing might decline over time.
Christian Tarsney: Yeah, I think that’s probably right. We haven’t yet asked the question of how do you quantify the rate at which your ability to predictably influence the far future declines, but a natural way to quantify it is you want to put the world into some desirable state, call it S, rather than a not-desirable, or less-desirable state, not-S. And you want to know, if I can make the difference, say, in the next 10 years, or 100 years, or something between the world being in state S and not-S, how likely is it that my action will still be making the difference or still determining the state of the world 1,000 years or 1 million years or something from now? And the kind of straightforward way to model that is maybe it declines exponentially. So, maybe there’s a 1% chance that I can make the difference in the next 100 years.
Christian Tarsney: And then after that, for every century, there’s a 50% chance that something’s going to come along where my action no longer makes the difference, right? So, that would be the kind of constant rate of fall-off, or would produce, in effect, an exponential discount rate on the value of our interventions. But it could be that, for instance, the probability that some exogenous event comes along and spoils the effect of my intervention does decline with time, but nevertheless declines slowly enough that this discounting effect is still quite significant, and really cuts into the expected value of trying to influence the far future.
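To make that picture concrete, here is a small illustrative sketch; the numbers are placeholders, not from the paper. Assume a 1% chance of making the difference in the first century, a constant per-century probability r that some exogenous event nullifies the effect afterwards, and a fixed value for each century the effect persists. Under a constant wash-out rate, the expected number of centuries of persistence is roughly 1/r, which is what produces the effective exponential discount Christian describes.

```python
def expected_persistence_value(p_initial, washout_per_century, value_per_century,
                               horizon_centuries=100_000):
    """Expected value of an intervention whose effect can be 'washed out' by
    exogenous events arriving at a constant per-century rate."""
    total, still_matters = 0.0, 1.0
    for _ in range(horizon_centuries):
        total += p_initial * still_matters * value_per_century
        still_matters *= (1.0 - washout_per_century)   # geometric decay of persistence
    return total

# Illustrative numbers only: 1% chance of making the difference, value 1 per century.
for r in [0.5, 0.1, 0.01, 0.001]:
    v = expected_persistence_value(0.01, r, 1.0)
    print(f"wash-out rate {r} per century -> expected value ~ {v:.2f} (about 0.01 / {r})")
```

If instead the wash-out rate itself falls over the centuries, as Rob suggests, the sum can be much larger than p/r; the constant-rate case is the pessimistic benchmark.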
Robert Wiblin: Yeah, I guess that could happen two different ways, or maybe these are just the same way, but if the positive effect has lasted 100 years, then maybe we’ve learned something, that that was a robustly positive intervention. And so it should be expected to be robustly positive in future centuries as well. Another thing might be that humanity could go through some transition to a far more stable state where things are less chaotic, and if your intervention had a positive effect up until the beginning of that more stable situation, then that effect is just going to keep being positive for as long as the stable situation persists after that.
Christian Tarsney: Yeah, that’s right. So, I mean, one way of arguing for longtermism in the face of these epistemic worries is to say, well, there’s at least a non-zero probability that, again in the case of existential risk, if our civilization survives the next 1,000 years, then all the dangers are behind us. We’ll be multiplanetary, there’s just no chance of these exogenous events coming along. And so we’ll survive until the heat death of the universe or something. And on the other hand, if we don’t survive, then maybe the rate at which life arises on planets is just so low that there’s essentially no chance that another civilization will ever replace us.
Christian Tarsney: And so at least if you have sort of, well, say, 1 in 1 billion credence in this hypothesis that says that the effects of avoiding or causing existential risk will be persistent until the heat death of the universe or something like that, then that’s enough to generate enormous amounts of expected value. But then we’re back to these worries about fanaticism, right? That so much of the expected value of existential risk reduction is coming from this maybe very improbable hypothesis of unlimited persistence.
Robert Wiblin: Okay. So you’ve got a paper called The epistemic challenge to longtermism. What do you hope to add to this discussion with that paper?
Christian Tarsney: Yeah, so the purpose of that paper basically is to do some relatively simple, and as I think of it, kind of preliminary modeling of the relative weight of these two competing forces. So the way that I imagined things in that paper is, as I described it a moment ago, we want to put the world into some more desirable state rather than a less desirable state. And there’s some chance that you can succeed in doing that in the medium-term future, say the next 100 or 1,000 years. And then what you want to know is, what’s the probability that that effect will persist for a given length of time? And you imagine that there are two kinds of exogenous events that could come along. One is negative exogenous events, where you managed to put the world into the more desirable state, but then 1 million years later, 10,000 years later, or something, some event comes along that puts the world into the less desirable state anyway. So for instance, humanity goes extinct anyway.
Christian Tarsney: And then the other possibility is, say we fail to put the world into the more desirable state maybe because we focus on the short term instead of focusing on existential risk, and we go extinct, but then nevertheless, some event comes along later, like another intelligent civilization arises on Earth. And so by 1 million years from now, civilization’s back in business or something like that. The goal of the paper, and I think this is inevitably a quantitative exercise, is to figure out what these kinds of considerations do to the expected value of the far future. But I tried to bite off a little piece of this and say, well, let’s suppose we’re total consequentialists. So we grant some normative assumptions that are favorable to longtermism and let’s suppose we’re just happy to be naive expected value maximizers.
Christian Tarsney: So, I’m setting aside these worries about fanaticism for most of the paper, but then I’m going to try to examine the set of empirical assumptions or empirical beliefs or worldview that you might have that’s kind of least favorable to longtermism, within reason. And that means, among other things, thinking that maybe there is some irreducible rate at which these exogenous events come along, like an ineliminable minimum chance of extinction per century that produces this kind of permanent exponential discount rate on the effects of our actions. And so the purpose of the paper is to kind of model, if humanity either is just existing in a good state for some indefinite period of time, or maybe we’re expanding, we’re settling more of the universe, but there’s also this ineliminable possibility of exogenous events coming along, does the expected value of attempts to influence the far future — for instance by existential risk reduction — still look so good in comparison to the expected value of attempts to improve the more short-term future?
Robert Wiblin: Yeah. Okay. So, yeah, earlier I was saying, well, if you make some positive intervention, then that might get washed away in future, but the rate at which it washes away probably declines over time. Or there’s some reason to think that might be true. And here you’re considering a relative kind of worst-case scenario where it’s the case that the rate of your actions being washed away in the future just remains constant. So every 100 years, the odds of something great that you accomplished just becoming irrelevant remains, say 1%, 10%, 15% or whatever it may be. In which case you get this geometric decay on the value that it provides over time. And then you’re thinking, well, in that fairly dismal scenario, is it still worth focusing on the very long term? Or do longtermist projects then get dominated by things that have a bigger impact on the immediate term?
Christian Tarsney: Yeah. That’s right. I’m trying to do two things. One is to develop a fairly general model where you can think about any longtermist intervention you want under the rubric of more-desirable state, less-desirable state, and how long does that effect persist. But then the case that I’m trying to actually address numerically is the expected value of existential risk reduction. And in that case, the conclusion that I reach, and I think there’s an ineliminable level of subjectivity here, so other people should take a crack at this and see what conclusions they come to, but the conclusion that I come to anyway, is that even when you make what look like the most pessimistic assumptions for longtermism within reason on these kinds of empirical questions, nevertheless, the expected value of existential risk reduction still looks quite good in comparison to short-termist alternatives.
Robert Wiblin: Okay. Yeah. Is there any way of kind of summing up all the empirical ingredients that go into that pie?
Christian Tarsney: Yeah. Well, the short answer is there’s a whole bunch of parameters in this model, each of which makes some difference, but probably the most important things you have to think about are, number one, how much of a difference can I make to the probability of the better outcome rather than the worse outcome? So, the probability of humanity surviving rather than going extinct in the medium term. Where what I do in the paper is a very kind of Fermi estimate-style thing where I just say, “Well, imagine that human civilization as a whole focused on nothing but ensuring its own survival, every waking minute of every human being’s day for the next 1,000 years, how much could we change the probability that we survive?” And I say, “Well, surely at least 1%”, and then, okay, what proportion of humanity’s work hours over the next millennium, say, can you buy for $1 million?
Christian Tarsney: And if you assume that the returns to existential risk reduction at least aren’t increasing… We assume typically things have diminishing returns, so, to be as pessimistic as possible, assume you have constant returns, and then that allows you to get a lower-bound estimate of how much you can change the probability of premature extinction. So then the second important empirical question is, how good do you think the survival of humanity would be? And that depends a lot on, are you thinking about a scenario where we just remained Earth-bound for say 500 million years until the sun gets too hot? Or a scenario where we expand into the universe, and so the value of human civilization is growing presumably cubically in time because we’re expanding in spatial dimensions?
Christian Tarsney: So, I consider both of those possibilities, and I try to make conservative assumptions about what the welfare of future people will be like. And then the final crucial piece is, what do you think the ineliminable long-term rate of exogenous events coming along is going to be? And again, in the spirit of just testing the robustness of longtermism and making the most pessimistic assumptions within reason, I say well, let’s take what look like the most pessimistic, reasonable assumptions about the next century. And probably the most pessimistic thing you could reasonably believe is that that represents the ineliminable long-term rate at which existential catastrophes come along. And that’s, at the absolute outside, maybe 1% a year. It’s probably less than that. So, what I conclude in the paper is, if you make sufficiently pessimistic assumptions about, for instance, the long-term rate at which existential catastrophes come along, ineliminably, you can come up with empirical assumptions that, if you really commit to them, will really cut the expected value of say existential risk reduction down to something trivial.
Christian Tarsney: So, for instance, if you think there’s an ineliminable 1% risk of existential catastrophe every year for the rest of time, right? But once you start accounting for uncertainties about parameters in the model… For instance, okay, I have at least some credence that far-future civilization will be more stable or more secure than that, that the annual rate of existential catastrophe will be only say one in 10,000 or one in 100,000, one in 1 million or something like that. And maybe I’m skeptical that humanity will ever settle the stars, but I think there’s at least a one in 1,000 or one in 1 million chance that we will. And maybe I think there’s a one in 1,000 chance of these really utopian scenarios where we manage to just produce astronomically more happiness per unit resource or per star system than our current technology allows us to, and you only need a little bit of credence in those more optimistic assumptions to get the case for existential risk reduction back on track, at least within the framework of expected value maximization.
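To show how those pieces fit together, here is a very rough back-of-the-envelope sketch in the spirit of the Fermi estimate Christian describes. None of these numbers are from the paper; they are placeholders meant only to exhibit the structure: a lower-bound estimate of how much $1 million shifts the probability of surviving the medium term, a value of survival driven by an assumed long-run catastrophe rate, and a credence mixture in which a small probability on a stable, high-value future does most of the expected-value work.

```python
# All parameter values below are hypothetical placeholders, not estimates from the paper.

def value_of_survival(annual_catastrophe_rate, value_per_century, max_centuries=1_000_000):
    """Expected future value, given survival of the medium term and a constant
    ineliminable long-run catastrophe rate."""
    survive_century = (1.0 - annual_catastrophe_rate) ** 100
    total, p_alive = 0.0, 1.0
    for _ in range(max_centuries):
        total += p_alive * value_per_century
        p_alive *= survive_century
        if p_alive < 1e-12:
            break
    return total

# Step 1: suppose all of humanity working on survival for 1,000 years would shift the
# probability of surviving by at least 1%, that $1M buys (hypothetically) one-billionth
# of humanity's work over that period, and that returns are constant (the pessimistic case).
delta_p = 0.01 * 1e-9

# Step 2: a credence mixture over long-run scenarios (credence, annual rate, value/century).
scenarios = [
    (0.98,  1e-2, 1.0),   # pessimistic: ineliminable ~1%/year catastrophe rate
    (0.019, 1e-5, 1.0),   # a much more stable but still modest-value future
    (0.001, 1e-7, 1e6),   # tiny credence in a very stable, very high-value future
]

ev_survival = sum(c * value_of_survival(rate, v) for c, rate, v in scenarios)
print(f"expected value of surviving the medium term: {ev_survival:,.0f}")
print(f"expected value of the $1M intervention:      {delta_p * ev_survival:.6f}")
# Note that the third scenario, despite its 0.1% credence, dominates the total.
```

Committing hard to the top row collapses the case; a sliver of credence in the bottom row makes it dominate the expectation, which is exactly where the fanaticism worry re-enters.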
Christian Tarsney: But then you end up back at this point, which motivates my interest in fanaticism, where, honestly, I think it’s just quite hard to figure out, at least without a ton of subjective guesswork, how much the expectational case for, say, working on existential risk reduction depends on these very extreme tail probabilities. But it’s at least prima facie plausible that it depends on them a lot. If you’re very skeptical about the scenario where far-future human civilization is extremely stable and expanding into the stars and producing enormous amounts of happiness — there are reasonable people who think that that’s just an outlandish possibility and assign it some trivially small probability — yeah, it could turn out that the expectational case for existential risk reduction is really driven by that trivial probability you assigned to what you see as an outlandish scenario.
Christian Tarsney: And so I think where this exercise leaves me is thinking, insofar as we’re happy to be expected value maximizers for whatever reason, the case for prioritizing at least existential risk reduction, maybe the long-term future more generally, looks pretty robust. But insofar as we have these residual worries about fanaticism, there is a kind of question mark, where those worries about fanaticism combine with our epistemic worries about the far future to produce some residual discomfort, potentially, with longtermism.
Robert Wiblin: So to kind of repeat that back, it sounds like if you’re someone who just really feels very unsure about what are the odds that a group of people really trying to reduce the risk of extinction are going to succeed, and if you’re really unsure about how large could human civilization become in future, or Earth-originating life, like maybe it can go to space, maybe it can’t, no idea, and if you’re also unsure about like, whether we’ll ever be able to achieve some kind of stable state where extinction is now very unlikely, then all of that uncertainty kind of means that there is some reasonable possibility that we will get to this stable, very positive and very big state. And so that uncertainty means that there’s a strong case for working to try to achieve that outcome by reducing the possibility of some catastrophe that would take us off track now.
Robert Wiblin: And to get around that, you kind of have to say, no, I’m really sure that we can’t reduce extinction risk now, and I’m really sure that we’re never going to achieve a stable state. And I know we’re never going to get off Earth, or we’re never going to leave the solar system. Those are things that, I guess, some people claim, but I don’t really know what the basis would be for being so confident about any of those claims. To be honest, none of those three seems plausible to me. But someone who was committed to those empirical views would have a strong case against working on longtermist projects.
Christian Tarsney: Yeah, I think that’s right. I mean, I share your intuition, not from the perspective of any of this empirical stuff being my real expertise, but it does seem just very strange to me to not assign some substantial probability to humanity eventually settling the universe and living in ways that are radically different and maybe radically better than the way we live today. But part of the purpose of this exercise is to say, well, there are these smart, apparently reasonable people who really do find these scenarios outlandish, and they assign at least most of their credence to the more kind of mundane Earth-bound scenarios for what the far future will look like. And should those people be longtermists, particularly when you throw these epistemic worries into the mix?
Robert Wiblin: Yeah, I guess it seems like you could combine this paper with the fanaticism one to get some kind of middle-ground thing where maybe you have to discount or chop off the most extreme biggest outcomes, because they would be large relative to the background uncertainty. But then maybe if you kind of bake this cake altogether, you end up with some moderately strong case or moderately robust case in favor of working on really longtermist projects.
Christian Tarsney: Yeah. This is all very back of the envelope, and subjective, but it seems to me, and I make some argument for this in the paper on stochastic dominance, that our background uncertainty is at least great enough that we should be fanatical or expected value maximizers roughly out to probabilities of like one in 1 billion, or something like that. And then in the context of these epistemic worries, if you have at least a one in 1 billion credence in these scenarios that permit extreme persistence where far-future civilization will be extremely stable, for instance, then that’s enough to not just make the expectational case for longtermism, but make the more robust case on the basis of mere stochastic dominance.
Christian Tarsney: So, yeah, I guess I would say if you have less than a one in 1 billion credence in the more optimistic high-persistence scenarios, or you have less background uncertainty than that argument presupposes, I would view that as unreasonable overconfidence. I think many people would. But, again, a lot of this comes down to subjective judgements about what reasonable probability assignments are.
Best arguments against working on existential risk reduction [01:32:34]
Robert Wiblin: Yeah. For people who are ethically inclined towards longtermism as a kind of practical, moral principle, what are the best arguments against working on things that look like existential risk-related or longtermist-related projects in practice?
Christian Tarsney: Well, I usually say for my own part, I’m fairly sold on the idea that existential risk should be high on the list of priorities for longtermists. And one reason for that is I think that that’s where we have the clearest argument for potentially extreme persistence. So when we’re thinking about other things like changes to institutions or norms or values, maybe those changes will persist for a very long time, but it seems much more plausible to think that they’ll eventually wash out, or they would have happened anyway, or something like that. But if you wanted to make the case for putting existential risk kind of lower on the list of longtermist priorities, the most straightforward argument is just to contest the assumption that the survival of humanity is very, very good in expectation.
Christian Tarsney: So, of course, you might think, for instance, to take the extreme case, if you’re something like a negative utilitarian, that we only care about minimizing suffering. And if humanity survives for a very long time, maybe all we’ll do is just spread suffering to the stars and that’ll be terrible. So, of course, that’s a reason for, well, not just not trying to minimize extinction risks, but maybe hoping that humanity goes extinct, or something like that.
Robert Wiblin: Or I suppose, perhaps trying to focus on preventing those worst-case scenarios, which might involve being kind of neutral on extinction perhaps, but focusing on how things could become negative in value.
Christian Tarsney: Yeah, that’s true, too. But even if you don’t think of yourself as a negative or a negative-leaning consequentialist, something that I think a number of longtermists believe is roughly that the modal case where humanity survives is one where maybe things are better than break even in expectation, but we still achieve only a tiny fraction of our potential value. So maybe we have dysfunctional social institutions, or people never acquire the right values, or the true moral beliefs, or something like that.
Robert Wiblin: Or we’re just not ambitious enough.
Christian Tarsney: Yeah, yeah. Imagine, for instance, that the far future… By default, the modal scenario is kind of like human civilization today, where at least if you set aside worries about factory farms and wild animals, and just think about human beings, plausibly we’re like a little bit above break even, probably most people’s lives are worth living, but we’re not all ecstatic all the time or something. And maybe the modal scenario is that this continues, just with fancier technology. Then you might also think, well, there is this other possibility out there where we just achieve astronomical levels of happiness and value and in one way or another optimize the universe for value, and making even a very small change to the probability of a future optimized for value versus the modal mediocre future has greater expected value than reducing the probability of existential risk, which just for the most part increases the probability of that mediocre future.
Robert Wiblin: Yeah. That makes sense. I kind of think I believe that. And it’s maybe something we should talk about on the show a little bit more, that there might be more value in kind of getting people to raise their vision for how amazing the future can be, and not aspiring merely to kind of survive and persist in what I would say is the kind of mediocre situation that we’re in now, where it’s not even really clear whether there are more good things than bad things in the world. Plausibly there are, and plausibly there aren’t. But really what we should be aiming for is something where it’s just so astronomically clear that the universe is an amazing place, and the vast majority of stuff that’s going on is fantastic. I think, maybe among people I know, many people have that vision for something that’s extraordinarily astronomically good. But I think that’s not a mainstream cultural idea, and I would feel a lot better about the future if it were.
Christian Tarsney: Yeah, I think that’s right. And I do find it certainly plausible that longtermists should diversify their portfolio to some extent between increasing the probability of the survival of humanity versus increasing the expected value of human civilization, conditional on survival. But I guess one thing I’m inclined to think, and I don’t think I have any amazing arguments to back this up, but I’m inclined to think that insofar as putting humanity on the track towards a utopian future, a future optimized for value, is tractable, insofar as that’s something we can do much about, it’s also something that’s probably likely to happen anyway. So if you’re a moral realist and you think that there are real moral truths out there, and those truths are discoverable, and that agents, when they apprehend the moral truth, are at least sort of asymmetrically motivated to do good things rather than bad things, then plausibly in the long run, good moral values will be discovered and their influence will propagate, and we’ll make our way towards utopia.
Christian Tarsney: And if you don’t think that, if you think our motivational systems are fine-tuned by evolution to do something other than pursue the good, e.g. to maximize reproductive success or something like that, and even if there is a moral truth out there, in the long run attempts to persuade people of the moral truth are not going to have an enormous global influence on people’s behavior because we’re all just going to, in the long run, be reproductive fitness maximizers or something like that… I guess the thought is that the path to utopia is either kind of inevitable or nearly impossible, or something like that.
Robert Wiblin: Oh, interesting. Okay. So this is kind of an argument that it’s not very tractable because there’s going to be strong underlying tendencies for people to adopt or not to adopt particular values and goals. And just trying to make moral arguments… Either they do work, in which case they’ll work at some point anyway, or they’re going to fall on deaf ears and it’s not going to work regardless of whether you personally try or not.
Christian Tarsney: Yeah. And I shouldn’t overstate the extent to which I believe this. I mean, I think there’s certainly… It’s not unreasonable to have some credence in a middle ground where actually we can make a difference. Particularly if you think values or motivations or something are going to get locked in at some point, maybe when we achieve superintelligence or something like that.
Robert Wiblin: Yeah. I mean, I’m not sure that I find this that intuitively probable. Just like looking around culturally at different civilizations, different cultural groupings, both like different ones that are around today and different ones that have been around throughout history… It seems like, yes, there are particular things that are in common, and are quite unusual not to have, but then with people’s discretionary budgets, the ways that they express themselves and express their values, you see quite a bit of variation in what people choose to do with that slack. And it’s influenced by philosophical arguments as well as religion and culture and tradition and all of that. And so it seems like maybe that stuff is difficult to shift around because that kind of culture is fairly persistent, but inasmuch as you think that you have a good argument and have persuaded some people, maybe other people will be persuaded if they hear it as well.
Christian Tarsney: Yeah. I mean, I find that perfectly plausible or at least worth entertaining, but I think there are two stories you can tell about the historical record that correspond to the two prongs of this dilemma I was describing. So one prong is the kind of moral progress story, where slowly, and unevenly, kind of in fits and starts, we’ve been inching our way towards the moral truth. And there’s plenty of diversity, number one because there’s a million ways to be wrong and only one way to be right. So insofar as we haven’t reached the moral truth yet, we have different moral errors that we’ve fallen into. And number two, insofar as different cultures are making progress along that path towards the moral truth at different rates or something like that, so, if some cultures at some point in the 19th century accept slavery and others don’t, well that’s moral diversity. But it doesn’t defeat the idea that we’re all ultimately progressing towards the moral truth that slavery’s bad, or something like that.
Christian Tarsney: And then the other story you can tell is the kind of hard-nosed Darwinian story where what’s happened in the last say 2,000 or 3,000 years is just that we’ve been out of evolutionary equilibrium in this weird way, where we have motivational systems that were fine-tuned for our ancestral environment. And suddenly we’re thrust into this new environment where our motivations could maybe go off in weird directions and there aren’t strong selection pressures because, well, we have things like an agricultural surplus, that means that even people making weird non-fitness-maximizing choices, their lineages can survive for a while. But in the very long term, maybe we’ll end up back in some Malthusian trap, or maybe we’ll end up with some evolutionary competition between artificial superintelligences or something. And then evolution will take back over, and you’ll just get motivational systems that are optimized for something like reproductive fitness.
Robert Wiblin: Yeah.
The scope of longtermism [01:41:12]
Robert Wiblin: Alright. Let’s push on to briefly discuss another longtermist-related issue, this essay that you and Hilary Greaves are working on at the moment about what you call the scope of longtermism, which is roughly the question of how much of humanity’s resources we could or should spend on improving the long-term future before the marginal returns diminish so much that spending any more on longtermism would be a mistake, or at least no better than what else we could do. I know you’re still in the process of thinking about this and talking about this and writing it up, but how are you analyzing the questions? And do you have any preliminary ideas?
Christian Tarsney: Yeah, so I would say we certainly haven’t reached any conclusions, and the purpose of the essay is more to raise the question and get other people thinking about it and do a survey of some possibilities. There are two motivations for thinking about this. One is that a worry that I think a lot of people have — certainly a lot of philosophers — about longtermism is that it has this flavor of demanding extreme sacrifices from us. That maybe, for instance, if we really assign the same moral significance to the welfare of people in the very distant future, what that will require us to do is just work our fingers to the bone and give up all of our pleasures and leisure pursuits in order to improve the probability of humanity having a very good future at the eighth decimal place, or something like that.
Christian Tarsney: And this is actually a classic argument in economics too, that the reason that you need a discount rate, and more particularly the reason why you need a rate of pure time preference, why you need to care about the further future less just because it’s the further future, is that otherwise you end up with these unreasonable conclusions about what the savings rate should be.
Robert Wiblin: Effectively we should invest everything in the future and kind of consume nothing now. It’d be like taking all of our GDP and just converting it into more factories to make factories kind of thing, rather than doing anything that we value today.
Christian Tarsney: Yeah, exactly. Both in philosophy and in economics, people have thought, surely you can’t demand that much of the present generation. And so one thing we wanted to think about is, how much does longtermism or how much does a sort of temporal neutrality, no rate of pure time preference, actually demand of the present generation in practice? But the other question we wanted to think about is, insofar as the thing that we’re trying to do in global priorities research, in thinking about cause prioritization, is find the most important things and draw a circle around them and say, “This is what humanity should be focusing on,” is longtermism the right circle to draw?
Christian Tarsney: Or is it maybe the case that there are a couple of things that we can productively do to improve the far future, for instance reduce existential risks, and maybe try to improve institutional decision making in certain ways, but that for other ways of improving the far future, well, either there’s just not that much we can do, or all we can do is try to make the present better in intuitive ways: produce more fair, just, equal societies and hope that they make better decisions in future.
Robert Wiblin: Improve education.
Christian Tarsney: Yeah, exactly. Where the more useful thing to say is not, we should be optimizing the far future, but this more specific thing, okay we should be trying to minimize existential risks and improve the quality of decision making in national and global political institutions or something like that.
Robert Wiblin: There are two things there. One is a demandingness issue: should we spend almost all of our time trying to improve the very long-term future and hang the present? And it also sounded like you were alluding to the idea that, after we’ve spent some amount of resources doing things that are specifically for the long-term future, there might end up being quite a degree of alignment between stuff that makes the long term go well and things that make the present go well. Because the way to figure out how to improve decisions that we’ll make in the future with the institutions we have now is probably just to improve those institutions and make people more reasonable and informed and better able to make decisions now, and then hope that that will carry forward. And so doing things that would make things look better in 100 years or 1,000 years might just end up looking awfully similar to just trying to improve how things are being run today.
Christian Tarsney: Yeah, exactly. You could think of there being a spectrum between radical longtermism and subtle longtermism, where radical longtermism, for instance if you really took seriously the idea that it’s all about maximizing the growth rate, maybe so that we can start launching our space probes as soon as possible and minimize astronomical waste, get to all the stars before they vanish beyond the astronomical horizon… And so then the thing that we should be doing right now, the kind of longtermist thing to do, as you described, is making factories to make factories. Every waking moment should be about launching those first space probes as quickly as we can. That’s the radical longtermism.
Christian Tarsney: And then the more subtle longtermism says things like, “Well, the far future is very important. We don’t know exactly what challenges we’re going to face, what choices we’re going to have to make, so the best thing we can do right now is try to equip people 50 or 100 years from now to face those challenges and make those choices better.” And for instance, one way we can do that is by trying to improve things like social capital and social trust. Societies where there’s a higher level of trust have more effective institutions. People are more willing to make sacrifices for the common good. For instance, they’re more willing to wear masks during a pandemic, say, and that just has all sorts of unexpected payoffs in all sorts of situations. What we should really be doing is, for instance, trying to make existing societies more fair and just and equal in order to improve social trust and social capital, and that’s going to have all these downstream payoffs in terms of how we face these unexpected challenges.
Christian Tarsney: And there, that’s not to say that longtermism is false or that longtermism makes no difference in practice, because it might be, if we were just thinking about the next 100 years, we should be focused on factory farmed animals or something like that. Maybe thinking about the long-term future is shifting our focus from one of the things we might intuitively be doing to another, but the result is not this crazy, demanding…
Robert Wiblin: In a way that’s very recognizable.
Christian Tarsney: Yeah, exactly.
Robert Wiblin: Yeah. Yeah. Interesting. Do you have any kind of preliminary conclusions? Would you like to guess where you might come down on these?
Christian Tarsney: Yeah, more the second thing: nothing that I’d want to call a preliminary conclusion yet. I guess my own intuition is in that subtle longtermist direction. That we should think of the future as very important, but also very unpredictable. And that means that most of what longtermists should be doing, apart from a few obvious cases like reducing existential risks, is trying to equip the next few generations to make choices better. And that involves mostly… I think this is an interesting question. If we want people 100 years from now to respond to challenges better, are the things that we’re going to do to achieve that end mostly things that will make people between now and then better off?
Christian Tarsney: Or maybe we should subject the next couple of generations to lots of adversity so that they’re forced to build character and learn how to confront… Maybe we should put them all in Hunger Games scenarios so that they can survive if humanity is on the brink or something. But intuitively I think more prosperous, more fair, more just, more equal societies are just likely to handle challenges better. And so there’s a kind of natural connection between trying to improve our ability to respond to challenges and just trying to improve the lives of people alive today and in the near future.
Robert Wiblin: I guess the kind of natural middle ground view is maybe humanity could spend $1 trillion each year on stuff that is quite targeted at the long term, things related to nuclear weapons and dangerous new technologies and building friendships between countries so they don’t go to war, and things like that. But then, after we’ve soaked up all that learning and returns, what’s left is stuff that is extremely recognizable, like trying to get governments to work better, and making better decisions, and generally making sure that people will know what’s going on in the world, and all this other stuff that we were kind of doing anyway, maybe not quite as much as we should, but it doesn’t look really at all peculiar.
Christian Tarsney: Yeah. I think that’s right. Although another thought that’s worth adding to the mix, and this is an observation that I’m stealing at least proximately from Carl Shulman, is that even that $1 trillion, the stuff that you’re spending on nuclear weapons and biological risks and so forth, it may not be that difficult to justify from a short-termist perspective. If you’re just thinking about the next 100 years, there’s already a pretty compelling case for worrying about nuclear weapons and biological risks and even artificial intelligence. Even there, maybe it is just longtermism reshuffling the list of the top 10 priorities.
Robert Wiblin: Converges with common sense. Yeah, or taking something that was the tenth priority and making it the fifth or taking something that’s the third and moving it to the first. This was all stuff that we really should have been focusing on if we were smart anyway. I probably should stop describing all of this stuff as peculiar because I’m not really sure that any significant fraction of people think that trying to prevent a pandemic or trying to prevent nuclear war actually is peculiar. Actually it is very common sense.
Christian Tarsney: Yeah, it seems in practice like one of those things where the big challenge is just to get people marching in the direction that everybody agrees is the direction to march.
The value of the future [01:50:09]
Robert Wiblin: Another quick thing that I know you’ve been thinking and talking about at GPI is about how large the expected value of the continued existence of human-originating or Earth-originating civilization might be. As I understand it, you’ve been looking at historical trends in how well the world is going and how well we’re cooperating and things like that. And maybe also thinking, is there a tendency towards things being good rather than bad because intelligent agents that are capable of dominating a planet are maybe more likely to go out and pursue the goals they have and try to make things better, rather than just go out and engage in wanton destruction? We probably wouldn’t last very long if things were like that. Obviously a very speculative area, but what considerations have featured prominently in those discussions?
Christian Tarsney: Yeah, I think those two threads that you’ve identified have probably been among the things that we’ve been most interested in. There is this kind of outside view perspective that says if we want to form rational expectations about the value of the future, we should just think about the value of the present and look for trend lines over time. And then you might look at, for instance, the Steven Pinker stuff about declines in violence or look at trends in global happiness. But you might also think about things like factory farming, and reach the conclusion that actually, even though human beings have been getting both more numerous and better off over time, the net effect of human civilization has been getting worse and worse and worse, as we farm more and more chickens, or something like that.
Christian Tarsney: I’ll say, for my part, I’m a little bit skeptical about how much we can learn from this, because outside-view, extrapolative reasoning makes sense when you expect to remain in roughly the same regime for the time frame that you’re interested in. But I think there’s all sorts of reasons why we shouldn’t expect that. For instance, there’s the problem of converting wealth into happiness, which we just haven’t really mastered, because, well, we don’t have good enough drugs or something like that. We know how to convert humanity’s wealth and resources into cars. But we don’t know how to make people happy that they own a car, or as happy as they should be, or something like that.
Christian Tarsney: But that’s in principle a solvable problem. Maybe it’s just getting the right drugs or the right kinds of psychotherapy or something like that. And in the long term, it seems very probable to me that we’ll eventually solve that problem. And then there’s other kinds of cases where the outside view reasoning just looks clearly like it’s pointing you in the wrong direction. For instance, maybe the net value of human civilization has been trending really positively. Maybe humanity has been a big win for the world just because we’re destroying so much habitat that we’re crowding out wild animals who would otherwise be living lives of horrible suffering. But obviously that trendline is bounded. We can’t create negative amounts of wilderness. And so if that’s the thing that’s driving the trendline, you don’t want to extrapolate that out to the year 1 billion or something and say, “Well, things will be awesome in 1 billion years.”
Robert Wiblin: Yeah. I see. Interesting. I think it is quite possible perhaps that humanity has overall been negative because of all of the suffering that we’ve created in factory farming. And I guess other very negative places too, perhaps prisons, there’s an enormous amount of suffering there. And in these very specific locations that’s enough to outweigh the broader, mild good that the rest of us get. But then you would have to extrapolate that… If you do this extrapolation, you’re going to end up assuming that in 1,000 years’ time, we’re just going to have 1,000 times as many animals or something like that in factory farms, which just seems extremely improbable given that it’s already such a borderline-outmoded technology, so why on earth would you project that forward that way?
Christian Tarsney: Right. The overall trendline is being driven by the one phenomenon, where that one phenomenon could just easily go away in 100 years, maybe for just boring technological and economic reasons. Again, it seems like extrapolating too far out in the future to me at least looks like a mistake.
Robert Wiblin: Yeah. And what do you think of the idea that we should think that agents that are smart enough to exist in huge numbers probably also are smart enough to satisfy their preferences and maybe do things that are moral rather than just things that are randomly good and bad?
Christian Tarsney: Yeah. I think there’s two possible arguments there. One is the idea that agents generally tend to pursue their own good, and the universal good is just something like the sum of individual goods. And so if maybe my actions tend to promote my good and just be neutral for everybody else’s good, and similarly for everybody else, all else being equal, you would expect the future to be good rather than bad because each agent individually tends to make their life good rather than bad. And maybe if we’re sufficiently good at communicating and coordinating and maybe as we get more intelligent, we’ll be able to bargain and trade more and more efficiently, and then maybe a civilization full of self-interested agents could—
Robert Wiblin: Do all right.
Christian Tarsney: Yeah. That relies on some assumptions, for instance, that you remain in a situation where there’s something like a kind of parity of power between most of the individuals you’re thinking about. Or maybe we shouldn’t say remain in that situation, because we were just talking about factory farming. That’s an example of… Maybe in the human economy you have a bunch of agents who are each mostly self-interested, but they’re constrained by other people’s ability to do them harm or something, or constrained by a legal framework that they’ve all agreed to. And that means that they are collectively able to reach efficient outcomes, but then there’s this other set of beings who are just totally powerless.
Robert Wiblin: They get screwed.
Christian Tarsney: Yeah, exactly. And so maybe the future will be like that. And so the fact that we’re all pursuing our own good is no guarantee that things will turn out well from the point of view of the universe. But then there’s this other thought that maybe we have some general motivation to pursue not just our own good, but The Good. One way of thinking about this is maybe there’s something natural about empathy, or at least empathy is in some sense more natural than sadism. And certainly if you think that that tendency to care about the interests and welfare of other beings becomes stronger over time, that our moral circle expands, and that as we get richer and better able to satisfy our own needs, we’re more able to turn our attention to other people’s needs, then that would be a reason to think, maybe more generally or more robustly, that the future will be good.
Christian Tarsney: But I think there’s something paradoxical about this, because on the one hand, it seems very strange to think that there is such a thing as The Good, there are real values out there, and they’re knowable, but there is no asymmetric tendency to pursue them; it’s just as possible to end up in a civilization where most people are actively motivated to pursue the bad instead of the good. It just seems intuitively obvious that if there is such a thing as The Good, then we have some asymmetric tendency to pursue the good rather than the bad. And on the other hand, if you put on your hard-nosed scientist hat, that just feels crazy that there would be this… Well, you can imagine motivational systems that are optimized for anything. You can imagine an agent with any utility function you want, a reinforcement learning agent with whatever conceivable reward function it’s optimizing for, so why should there be written into the universe this law that more agents tend to be motivated one way rather than the other?
Robert Wiblin: Yeah. I guess we need to bring in the evolutionary psychologists.
Moral uncertainty [01:57:25]
Robert Wiblin: This actually very nicely leads into the next section, which is going to be about moral uncertainty, which has been one of your main research interests over the years. We’ve talked about it a couple of times on the show, but can you just quickly recap the problem of moral uncertainty?
Christian Tarsney: Yeah, so a lot of effort in philosophy and economics and elsewhere has gone into thinking about how we should respond to uncertainty about the state of the world, about empirical questions. That’s most standard decision theory. But until recently, much less effort has gone into thinking about how we should respond to uncertainty about basic normative questions, about what things are good or bad. When people talk about moral uncertainty in this context, in this literature, what they mean is uncertainty about those fundamental value questions. Is the good that we should be pursuing happiness? Or preference satisfaction? Or human perfection, or something like that?
Robert Wiblin: Justice.
Christian Tarsney: Right.
Robert Wiblin: Equity.
Christian Tarsney: Yeah, or should we be maximizing the total amount of value in the world, or the average, or whatever? In roughly the last 20 years, philosophers have really started to take this seriously and try to extend standard theories of decision making under empirical uncertainty to fundamental moral uncertainty. And then also started having this debate about whether you should actually do that, and whether moral uncertainty is the thing that we should care about in the first place.
Robert Wiblin: Yeah. Two angles that people come at this puzzle with are called externalism and internalism. Can you explain what those two views are and how they relate to moral uncertainty?
Christian Tarsney: Yeah, so unfortunately, internalism and externalism mean about 75 different things in philosophy. This particular internalism and externalism distinction was coined by a philosopher named Brian Weatherson. The way that he conceives the distinction, or maybe my paraphrase of the way he conceives the distinction, is basically an internalist is someone who says that normative principles, ethical principles, for instance, only have normative authority over you to the extent that you believe them. Maybe there’s an ethical truth out there, but if you justifiably believe some other ethical theory, some false ethical theory, well, of course the thing for you to do is go with your normative beliefs. Do the thing that you believe to be right.
Christian Tarsney: Whereas externalists think at least some normative principles, maybe all normative principles, have their authority unconditionally. It doesn’t depend on your beliefs. For instance, take the trolley problem. Should I kill one innocent person to save five innocent people? The internalist says, suppose the right answer is you should kill the one to save the five, but you’ve just read a lot of Kant and Foot and Thomson and so forth and you become very convinced, maybe in this particular variant of the trolley problem at least, that the right thing to do is to not kill the one, and to let the five die. Well, clearly there is some sense in which you should do the thing that you believe to be right. Because what other guide could you have, other than your own beliefs? Versus the externalist says well, if the right thing to do is kill the one and save the five, then that’s the right thing to do, what else is there to say about it?
Robert Wiblin: Yeah. Can you tie back what those different views might imply about how you would resolve the issue of moral uncertainty?
Christian Tarsney: The externalist, at least the most extreme externalist, basically says there is no issue of moral uncertainty. What you ought to do is the thing that the true moral theory tells you to do. And it doesn’t matter if you don’t believe the true moral theory, or you’re uncertain about it. And the internalist of course is the one who says well no, if you’re uncertain, you have to account for that uncertainty somehow. And the most extreme internalist is someone who says that whenever you’re uncertain between two normative principles, you need to go looking for some higher-order normative principle that tells you how to handle that uncertainty.
Robert Wiblin: What are the problems with those perspectives? And I guess, which one do you ultimately find more compelling?
Christian Tarsney: The objections to externalism usually start from just appeal to case intuitions. Suppose that actually it’s permissible to eat meat, but I have an 80% credence, on the basis of really good arguments, that it’s morally wrong. Clearly there’s something defective about me if I go ahead and do this thing that I believe to be probably seriously wrong, or something like that. You can also describe these cases, what are called Jackson cases in the literature, where I know for sure that either A or B is objectively morally the best thing to do, but both of them carry a lot of risk, and there’s this other option C that’s nearly as good as A according to the theory that says do A, and nearly as good as B according to the theory that says do B, and so it just seems really intuitive, and this is what expected value reasoning would tell you, that you should hedge your bets and choose C. Minimize your expected shortfall, or something like that. That’s one argument.
Christian Tarsney: Another argument is just to say well, most people who describe themselves as an externalist in this literature still think that people’s empirical beliefs make a difference to what they ought to do. If I think that this coffee cup might be poisoned, even if it in fact isn’t, I nevertheless shouldn’t drink from it, and I shouldn’t offer it to you. Clearly my empirical beliefs and uncertainties make a difference in what I ought to do. And then there’s the burden of proof on the externalist to explain what’s the difference between our empirical beliefs and our moral beliefs.
Robert Wiblin: I guess someone could try to reject that and say if the coffee isn’t poisoned, then you should drink it even if you think that it is poisoned. But then there’s something that’s kind of obtuse about that answer. It’s like, okay all right. I agree in some sense that’s true, but you’re not really grappling with the situation in which we really find ourselves in the real world. So what is the point of this deliberately point-missing statement?
Christian Tarsney: Yeah, the way that I think about this is, in any of these cases, or, for instance, in the classic kind of empirical Jackson case where there’s a doctor trying to decide whether to treat a patient and she doesn’t know which condition the patient has. And there’s one drug that would perfectly cure condition one, another that would perfectly cure condition two, but they’re both fatal if the patient has the other condition. And then there’s a third drug that would nearly perfectly cure both conditions. And to me, the argument is something like if you were in fact in that doctor’s position, question one, what would you do?
Christian Tarsney: And obviously any reasonable person would prescribe the third drug. And then the second question is, do you think that that’s just an arbitrary, arational, maybe spasmodic thing that you’re inclined to do, or do you think that you’re somehow being guided by reasoning or guided by norms when you make that decision? And it just seems totally incredible to me to not concede that that decision is guided by norms or guided by reasoning. And I think you can say exactly the same thing in the normative case.
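For readers who want the arithmetic behind that intuition, here is a minimal sketch of the expected choiceworthiness reasoning at work in these Jackson cases. The payoff numbers are invented for illustration; the structure is just that the third option gives up a little value under each hypothesis in exchange for avoiding a catastrophic outcome under one of them, and the same calculation goes through whether the hypotheses are empirical conditions or moral theories.

```python
# Expected choiceworthiness under 50/50 uncertainty between two hypotheses
# (medical conditions for the doctor, or moral theories in the normative analogue).
# The payoff numbers are illustrative placeholders.

credences = {"hypothesis_1": 0.5, "hypothesis_2": 0.5}

choiceworthiness = {
    "drug_A": {"hypothesis_1": 100,   "hypothesis_2": -1000},  # perfect cure if H1, fatal if H2
    "drug_B": {"hypothesis_1": -1000, "hypothesis_2": 100},    # the mirror image
    "drug_C": {"hypothesis_1": 95,    "hypothesis_2": 95},     # nearly perfect either way
}

for option, payoffs in choiceworthiness.items():
    expected = sum(credences[h] * payoffs[h] for h in credences)
    print(f"{option}: expected choiceworthiness = {expected}")
# drug_A and drug_B each come out at -450.0; drug_C comes out at 95.0, so hedging wins.
```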
Robert Wiblin: That’s a problem with externalism. What are the weaknesses of the internalist view?
Christian Tarsney: Yeah. Probably the biggest weaknesses in my mind are, well, two things. One is that it’s vulnerable to the regress problem, where the most extreme internalist says if you are uncertain between two normative principles N1 and N2, you need a higher order normative principle. But then intuitively it’s not like once we get from first-order ethics to second-order ethics, now the clouds open and everything’s clear and we know what those principles are. There’s uncertainty and debate there too, and so we need third-order principles, and so on and so on and so on. And there are some felicitous conditions: Phil Trammell has a nice paper that describes somewhat general conditions under which you get this kind of nice convergence as you go up to higher and higher-order norms. But in the general case where you have credence in the full range of higher order theories that might seem reasonable, it seems like you just end up stuck and you’re never able to reach a kind of norm-guided decision. And so that looks bad.
Robert Wiblin: Okay. The issue is you need some principle to evaluate and aggregate your different underlying moral philosophies that could plausibly be true. So you need this principle of moral uncertainty: how do you evaluate those? But then you're going to be uncertain about the different principles by which you would aggregate. You've got uncertainty about moral uncertainty, and so you need some higher-level principle to figure out how to aggregate at that level, and on and on and on. It would be nice if, at each level you went up, you converged on some common view where, regardless of the specifics of exactly what you believed, you ended up at the same place. And I guess an alternative would be that you got to some bedrock level where there was no moral uncertainty, and at that stage you could just say, "Well, now you just should do the thing that the correct theory says," and you would stop the regress that way. But it's possible that neither of those two escape hatches is available.
Christian Tarsney: Yeah, exactly. Nobody, as far as I know, has thought about eleventh-order meta-norms, but maybe once we get up to the eleventh-order norms, it’ll just be obvious what the true eleventh-order norm is, and we can stop. But that doesn’t seem particularly plausible. That’s kind of a negative argument against internalism, that it leads you into this regress. And then I think there’s a positive argument for some kind of externalism, which says roughly, “Okay, what is the question we’re asking here?” Maybe we’re asking what’s the rational thing to do under uncertainty in general. And what we want is some theory, some criterion of rationality that says, “A choice is rational if and only if phi.” Where phi is some formula that might make reference to the agent’s beliefs, might make reference to all sorts of things, but that’s the criterion for whether you’re being rational.
Christian Tarsney: And then whatever that theory is, whatever the content of phi is, you can imagine an agent who doesn’t believe that theory, doesn’t believe that that’s the criterion of rationality, but nevertheless, insofar as that is the theory of rationality, well, that’s the theory of rationality. You’re rational if and only if phi, whether or not you believe that you’re rational if and only if phi. That’s just to say if there’s any kind of true theory at all about this normative concept we’re investigating, at some point it has to be a theory that you could disbelieve, but nevertheless it still applies to you.
Robert Wiblin: Yeah. What do you make of this regress problem? Is that a serious problem? Or might there be a resolution that will allow us to be internalists to some degree?
Christian Tarsney: Yeah, I have a paper under review on this exact question. The place that I come down is what I describe as a kind of moderate form of externalism. Roughly, I think that the question we're interested in is how to respond rationally to uncertainty. And there is going to be some basic principle of rational decision making under uncertainty, whether that's maximizing expected value or expected choice-worthiness, or whether it's this stochastic dominance principle. Ultimately, that just is the criterion of rationality, and if you don't believe it, nevertheless it's the criterion of rationality and it applies to you. But that still allows that what you ought to do depends on your beliefs about your reasons, including your beliefs about the value of the possible outcomes of your options. And those beliefs, or your uncertainty, depend not just on your empirical uncertainties, but also on your first-order moral uncertainties.
Christian Tarsney: Basically where I end up is saying, rather than what the extreme externalists want to do and just say, “Empirical uncertainty, yes, normative uncertainty, no,” I think what we should say is when we’re asking a question about rationality, it’s empirical uncertainty, yes that matters. Moral uncertainty, yes that matters. But uncertainty about the principles of rationality, no, that doesn’t matter because whatever the principles of rationality are, they’re the principles of rationality and they determine whether an action is rational or not.
Robert Wiblin: Okay. So you’re going to have a mixed view where you’re going to be an internalist about empirical uncertainty, an internalist about moral uncertainty, but then an externalist about the basic principles of rationality. Is there a compelling reason you can give to take one view in some cases and the other view in the other cases?
Christian Tarsney: The argument is basically those two arguments I just gave. Number one, it lets you avoid the regress problem. But number two, and I think the more compelling argument: insofar as we're asking a question about rationality, the answer ultimately is going to be a criterion of rationality. And whatever that criterion turns out to be, that's the criterion, despite the fact that people are capable of doubting or denying it. The reason that we go externalist about rationality is that we're asking a question about rationality, right? Whereas if the question you're asking is which option will produce the most value? Then, yeah…
Robert Wiblin: That’s an externalist question.
Christian Tarsney: Yeah. Right. That question doesn't depend on your beliefs at all. It just depends on what the true moral theory is and the true state of the world. But when we're asking a question about rationality, that depends on the true theory of rationality, plus whatever the true theory of rationality says is part of the criterion, namely your beliefs: your probabilities about empirical questions and about normative questions.
Robert Wiblin: Okay. Yeah. That actually makes a whole bunch of sense to me. Do lots of other people accept this view?
Christian Tarsney: It’s not like I’ve had a flood of emails saying, “You’ve convinced me, there’s no more problem here.”
Robert Wiblin: Right, “You nailed it.”
Christian Tarsney: I think very few people will want to take the extreme externalist view that no moral or normative uncertainty ever should figure in our practical deliberations about what to do. And I think very many people also worry about the regress problem and do think that there has to be some norm where we pound our fists. I think something in the vicinity of this view, a lot of people would at least be open to.
Robert Wiblin: I guess some people approach this whole thing from a different angle where they don’t think of ethics as being these external, eternal truths that come with the universe, like, say, laws of physics. They think of it maybe as just a reflective equilibrium that they reach about their preferences, or a way of describing their personal values, rather than pre-existing truths that predate them. Should they still care about this issue of moral uncertainty? Does it make sense to talk about moral uncertainty there? It seems like that’s going to be related to this externalism/internalism thing.
Christian Tarsney: Yeah, I think that’s right. There’s a couple ways of taking that attitude. One option is to be what’s called a non-cognitivist about metaethics, where you think that ethical judgements actually just aren’t claims about the world at all. Not claims, for instance, that some actions have a property of objective rightness, rather they’re ways of expressing attitudes, for instance. The very simple hackneyed version of this view says that when I say that giving to the poor is right, that’s really just another way of saying hurray for giving to the poor. And if I say that kicking puppies is wrong, that’s just another way of saying boo, kicking puppies. There’s a lively debate about whether non-cognitivist people who take more sophisticated versions of that view can even accommodate the phenomenon of moral uncertainty in the first place.
Robert Wiblin: It’s like saying you’re cheering for the irrational soccer team to cheer for.
Christian Tarsney: Yeah. Well, boos and hoorays, or attitudes of approval or disapproval more generally, are not truth apt. They're not the kinds of things that can be true or false. And so it seems like they're not the kinds of things we can be uncertain about. We need a proxy for uncertainty. People generally think, well, there is the clear phenomenology of moral uncertainty: people feel uncertain about moral questions. And so even the non-cognitivist has to come up with some explanation for that, or something that acts like uncertainty, even if it really isn't. And then there's also the kind of cognitivist anti-realist, where the hackneyed version of that view is, for instance, that something is right if it accords with a certain subset of my preferences. Exactly how you distinguish the moral preferences from other preferences is open for debate.
Christian Tarsney: But there again, I think you feel some pressure at least to account for the apparent phenomenon of people being uncertain about moral questions. One way that could be is if you think, well okay, my moral values are grounded in my preferences, but they’re something like, the preferences that I would have under a certain kind of reflection. I know that, for instance, sometimes I am inconsiderate of other people’s interests, but I kind of wish, if I sat down and reflected and thought about how my actions were affecting them, I know that I would form a preference to be more considerate. And so my actual deep-down moral values are those preferences I would have if I were sufficiently reflective and had time to think about it, and so forth. And so then that’s something I can be uncertain about. I can be uncertain what my preferences would be under that kind of reflection. And so maybe that’s a way in which these kinds of anti-realists should still be interested in moral uncertainty.
Robert Wiblin: I see. They’d be interested in moral uncertainty inasmuch as people find it hard to introspect and really reach some deep, fully-informed, reflective equilibrium about what is it that they value morally. They could think about it for a very long time and hear all the different arguments and so on.
Christian Tarsney: Yeah. A simple way to think about this is we probably all have the experience of doing things that we regret for kind of moral reasons, treating people badly and realizing afterwards, and wishing that we hadn’t done it. And maybe a part of what you’re doing is trying to avoid those regrets, avoid ending up in a situation where you feel bad about things that you’ve done. And you can’t predict in advance with perfect precision what things you’ll feel bad about. You might want to hedge your bets a little bit and say, “Well, this feels like the sort of thing I might have regrets about later on, and so I’m not going to do it.”
Robert Wiblin: Yeah. All of this moral uncertainty thinking, to what extent do the different leading approaches to it have different practical recommendations for what people involved in global priorities research or effective altruism ought to do, or recommend that other people do? Does it have much known practical relevance yet?
Christian Tarsney: Well, so there’s a couple of things to say here. One is, if you take these kinds of moral uncertainty/skeptical views — either you’re an externalist, or there’s this view that says, “Well, you should just act on the one moral theory that you think is most plausible” — then there are various arguments that trade on maybe small probabilities of kind of extreme moral theories being correct. Arguably the insects case is one example of that. If you think I have some small but non-zero credence that insects are morally considerable or morally statused and if they are, that’s so extremely important that it swamps everything else. Now it’s a little bit unclear whether that’s really moral uncertainty or just empirical uncertainty about whether insects have certain kinds of experiences or something. But something like that, of course if you’re not trying to hedge your moral bets, then you’re not going to find those arguments about low-probability moral considerations compelling.
Christian Tarsney: If you do want to treat moral uncertainty kind of like empirical uncertainty, and for instance do something like expected value maximization, I think unfortunately the state of things at the moment is that we have a bunch of interesting theoretical ideas about how to respond to moral uncertainty, but they're all just very, very difficult to apply in practice. For example, one of the ideas at the cutting edge here, which Will MacAskill and Toby Ord and Owen Cotton-Barratt have developed, is the idea of variance normalization. Say you want to know how to make comparisons between the value scales of two theories. What you should do is look at the value or choice-worthiness assignment that each theory gives to all of the options in some big set. Then you measure the variance of those two choice-worthiness assignments, and then you stretch or contract the scales so that each theory now has a variance of, say, one. And then that tells you how to make comparisons between the scales.
Christian Tarsney: And there’s some interesting, potentially compelling theoretical arguments for doing things that way, but then to actually get practical implications out of that, you have to first of all make a list of potentially all the conceivable practical options that any agent might ever face and figure out the choice-worthiness of each of them according to a given moral theory. And then usually because that set is infinite, you need something called a measure that tells you how to weight different subsets of this infinite set. And then you need to actually calculate the variance. And all this before you actually try to apply it to the decision situation in front of you. My general impression at the moment is that there’s just this very big gap, and it seems like a bigger gap with respect to moral uncertainty than empirical uncertainty, between the kind of theoretical cutting-edge ideas and the practical use cases that we’d really like to apply these tools to.
Christian’s personal priorities [02:17:27]
Robert Wiblin: Alright. We’ve been at the philosophy for a couple of hours now, and I guess we’re heading towards the finishing line here. So I’d like to ask a slightly more practical question and see if any of your research has influenced the priorities that you have.
Robert Wiblin: So given all your research and everything you've learned over the last couple of years, and I guess over your entire career in philosophy… Imagine that you won the lottery, with a prize of billions of dollars, and decided you wanted to spend it to improve the world as much as possible. What do you think your multi-billion dollar philanthropic foundation might look like? And maybe how, if at all, might it differ from Open Philanthropy?
Christian Tarsney: Yeah, I think unfortunately the boring answer is, I don’t think it would differ very much. I don’t have radically heterodox views within EA circles about what we should be prioritizing. I guess self-servingly, I think there’s a lot of stuff we don’t know the answer to, and so that multi-billion dollar foundation should spend a bunch of money on research. I think clearly existential risks should be pretty high on the priority list. And I think various things to do with norms, and institutions, and values. For instance, build better institutions for international cooperation on not just catastrophic risks, but problems like climate change, like non-extinction level pandemics, all the sorts of things that might make a long-term difference to whether humanity flourishes or doesn’t. None of that is terribly unconventional.
Robert Wiblin: Yeah. Unfortunately I think that’s my view as well. If I had any really heterodox ideas for Open Phil, I probably already would have told them.
Robert Wiblin: But let’s say if we were to come back and ask you in 10 years time and you gave a different answer because of things that you had learned in the intervening time, what do you think are some of the likely possible reasons that you might have really changed your mind?
Christian Tarsney: I think the most likely thing is just discovering a new way to really usefully spend money, other than existential risk reduction and research. A lot of these norms and values things are about figuring out where philanthropic money or altruistically motivated agents can really get some traction, where there are leverage points, for instance, in government. And it seems reasonably plausible to me that we'll just learn quite a bit about that in the next 10 or 20 years. And so maybe there'll be something on the scale of existential risks where we would want to be pouring a very substantial part of our budget into it.
Robert Wiblin: Yeah. That makes a bunch of sense. Are there any important philosophical questions that you think we might plausibly make good progress on in the next 10 years, or are they likely to be probably longer-term projects?
Christian Tarsney: Yeah, I think it’s always very hard, even retrospectively, to say whether we’ve made any progress in philosophy. I want to say that we have. I mean, particularly thinking about moral or normative questions, lots of things to do with human equality, the abolition of slavery, women’s equality, for instance. That has been genuine moral progress. And whether or not it’s been abstract philosophical arguments that ultimately moved the needle, certainly philosophers have had something to do with it. Women’s rights for instance, or Peter Singer with respect to animal ethics.
Christian Tarsney: So yeah, I guess I’m reasonably optimistic that moral philosophers have contributed to moral progress. Whether or not that’s from the discovery of abstract moral truths or something else. And then, yeah, I think there’s room for us to make progress. I feel cautiously optimistic about making progress in the short term on, among other things, these epistemic questions where there is, among other things, a philosophical angle. Thinking about inductive reasoning when we’re anticipating big structural breaks in the world or things like that. Reasoning under unawareness, where there are possibilities that we just haven’t even imagined.
Christian Tarsney: So yeah, I’m optimistic about progress there. And then in the longer term maybe making philosophical progress on things like consciousness, and being able to really answer questions like what are the physical or functional substrates of experience, and being able to say for sure whether insects or simple artificial intelligences or whatever have experiences.
The state of global priorities research [02:21:33]
Robert Wiblin: Okay. Let’s do a little update on the state of global priorities research for anyone in the audience who’s been listening for the last couple of hours and is thinking, “Damn, I’d like to do the kind of work that this guy is doing.” How is the field of global priorities research progressing? I think a couple of years ago it was a new name for an agglomeration of different research agendas, and the number of people involved was really pretty small. But my impression is that the number is growing in leaps and bounds. Is that right?
Christian Tarsney: Yeah, I think that’s right. So we are trying to keep our finger on the pulse, both by staying in touch with academics elsewhere who are doing work that we think of as at the core of global priorities research, but also, and more particularly, by kind of cultivating the pipeline of up-and-coming undergraduates, master’s students, and particularly PhD students who are interested in doing GPR. And that pipeline has just been very, very strong and very promising in both philosophy and economics. And particularly it’s gratifying to see in economics because well, GPR and effective altruism got a head start in philosophy through people like Peter Singer, Toby Ord, and Will MacAskill. And so I think economics is a few years behind in terms of just the number of people working on these questions around cause prioritization.
Christian Tarsney: But yeah, there’s a great pipeline of really smart PhD students doing really exciting work. And we’re I think having some success at getting established academics to work on these questions and think about them seriously as well. And in the longer term, thinking about branching out into other fields and trying to get people in, say, political science, or history, or psychology interested in these questions too.
Robert Wiblin: Yeah. What sort of folks in the audience might be a good fit for the sorts of vacancies that are coming up now and in the next couple of years?
Christian Tarsney: So at GPI and other places where global priorities research is happening at the moment, hiring is in philosophy and economics, and particularly for people who are finishing PhDs. So the obvious uninspiring answer is if you're working on a PhD in philosophy or economics, you'd be a good fit for global priorities research in philosophy or economics. But there are also options for people earlier on in the pipeline: for instance, GPI has been hiring pre-doctoral researchers, people just finishing up their undergrad or maybe a master's. So anybody at that stage might want to think about applying for that.
Christian Tarsney: And then thinking in the longer term, I guess I think we do expect that there is a wider range of disciplines that have important things to contribute here. So I think maybe someone who is say an early undergraduate or in high school might also think about whether they have some interest or proclivity for some of these other fields, like quantitative historical research, like political science, building institutions that better serve the public good or something like that. And that might be a route to contributing as well.
Robert Wiblin: Yeah. What do you think is most distinctive about the office culture at the Global Priorities Institute?
Christian Tarsney: I mean, it’s hard to say, particularly now, because we haven’t had an office culture in the literal sense for a little over a year. I think compared to other academic research organizations, we really stress coordination and making sure that we’re thinking together about what the most important questions are, and that we’re directing our research energy towards those questions that we’ve identified as most important.
Christian Tarsney: So a failure mode that Hilary Greaves, who’s our director, has been very worried about and just really focused on avoiding is just everybody being kind of nerd-sniped by whatever random questions feel exciting, and so going off in maybe not the highest priority directions and not really coordinating and focusing on topics. And so we really try to maintain focus and mission alignment.
Christian Tarsney: I guess another thing that's distinctive compared to other academic organizations is an emphasis on actual research collaborations. Particularly in philosophy, we do more co-authoring than the typical philosopher. And then finally, I guess I would say we're just open to doing weird experimental stuff. For instance, we've spent a while trying to develop a somewhat arcane scoring system for potential research projects, and we go through scoring exercises to try to figure out what we want to do next. Yeah, we're just open to being weird and experimental in our culture in a way that I think academic organizations typically aren't.
Robert Wiblin: Yeah. It’s not a super conformist group.
Christian Tarsney: Yeah.
Robert Wiblin: I guess for the people who aren’t doing an academic PhD but are interested in supporting the field, what kind of non-research roles do you also need? I’m guessing there’s possibly communications people, operations folks would also be really useful?
Christian Tarsney: Yeah. So at the moment we have a fairly robust operations team that varies in size depending on who you count, but three to five people who support GPI’s operations. And I think we’ve benefited from having people who are enthusiastic about GPI’s mission and believe in what we’re doing and understand and are interested in what the researchers are doing. And yeah, just in all sorts of ways provide really good and useful support. So certainly somebody who’s interested in working in operations in an EA organization, I think research organizations are one place where operations work can be really crucial and can make a big difference to whether researchers are able to be productive and answer the questions they’re trying to answer.
Robert Wiblin: Yeah. I guess for people who want to keep track of jobs at GPI, obviously you list them all on your website. And I think that we also list all of your vacancies on our job board. So if you sign up to our newsletter, you’ll get periodic updates when we update the job board.
Robert Wiblin: If people want to donate to fund more global priorities research, I guess there’ll be a guide to the research agenda for the Global Priorities Institute on your website. Are there any other options that people should possibly have in mind if they’re scouting out different opportunities?
Christian Tarsney: Yeah, I mean, you can donate directly to GPI, and of course we’d be very happy if people choose to do that. You can also donate to the EA Long-Term Future Fund. Which, if I’m not mistaken, funds global priorities research among other things. I suppose you could donate to the Forethought Foundation. I’m not sure if they accept donations, but I would imagine they do.
Christian Tarsney: I’m sure there are other options. There are lots of research organizations out there that are doing things in the vicinity of global priorities research, thinking about the long-run future, for instance, that are doing great work and would benefit from financial support.
Robert Wiblin: Yeah. I guess of course there’s a fair bit of direct or indirect global priorities research that goes on at Open Philanthropy, but having billions of dollars in the bank, I don’t know that they take any more donations. Or I guess they don’t take donations unless they’re on the scale of billions of dollars. So that probably cuts down the audience somewhat.
Competitive debating [02:28:34]
Robert Wiblin: Okay. Yeah. You’ve been super generous with your time. I had a couple of more personal questions to finish off.
Christian Tarsney: Sure.
Robert Wiblin: When I was doing some background research for this interview, I found some videos of you on YouTube, seemingly involved with very competitive debating. And it sounded like you’d been involved in competitive debating when you were younger and then gone on to start judging some debates. And I saw this crazy video of a Lincoln-Douglas debate in which people were just like… I mean, people say that I talk fast on this podcast, but these debaters were just like going blisteringly quickly through a series of arguments to the point where I could barely understand what they were saying. And I think you were a judge in one of these debates. What is going on with that? Are people learning good thinking or debating skills from these competitions?
Christian Tarsney: Yeah. So I was a competitive debater for a couple of years in high school and then I coached for many years after graduating, basically until I left grad school and thought I needed to make a break and focus on research. But yeah, in the United States in particular, competitive debate has gone in this very weird esoteric direction where the thing that’s most striking about it to outsiders is how fast everybody talks.
Christian Tarsney: And the reason for that is pretty straightforward: you have limited speech time. You have six minutes for your first speech, and then the other debater has seven minutes for the next speech, and so forth. And you just want to make as many arguments as you can in that period of time. And you have judges who mostly were competitive debaters before, and so they've learned to understand, more or less imperfectly, people talking at 300 or 350 words a minute. So, yeah, you're just able to say more things, and the judges are mostly able to understand it.
Christian Tarsney: In terms of whether it, you know, has pedagogical value… I had an amazing experience in high school debate myself, and an amazing experience coaching it. I guess I would say that competitive debate creates a kind of stylized form of argumentation where, for instance, there are often arguments that are bad, but you can make them very quickly, and explaining why they're bad takes a long time. And so it's a good argument within the game of debate, right? Because it forces your opponent to waste a lot of their time explaining why your bad argument is bad.
Christian Tarsney: And I think there are debaters who understand things like this, or understand the limitations of the activity and understand that arguments that succeed in competitive debate aren’t necessarily good arguments. And those debaters can get an enormous amount out of the activity. One of the things that was really rewarding to me is you have high school students going out and reading well, not just Kant, and Locke, and Hobbes, and Mill, but reading contemporary philosophers, reading Nick Bostrom among other people, reading Christine Korsgaard. And in many cases they are really understanding and being very thoughtful and taking away a lot from it.
Christian Tarsney: But I think there also is, if you don’t recognize that it is a kind of stylized argumentative game and that the arguments that are succeeding in debate aren’t necessarily good arguments, and that there’s a higher level of academic rigor that you can ultimately aspire to, then I think it can be intellectually problematic.
Robert Wiblin: Yeah. Poor training for cases where you actually care whether you’re getting the right answer or not.
Christian Tarsney: Yeah.
Robert Wiblin: It seems like a weakness of the scoring system that you get lots of points just like really quickly kind of mumbling arguments that aren’t super persuasive, necessarily.
Robert Wiblin: One, you could have point scoring for rhetoric: do people make their arguments in a compelling way that an ordinary person hearing the speech might find interesting or be able to follow? And maybe also, do the judges find the arguments compelling as stated, or can they themselves think of counterarguments that you haven't addressed? Maybe that would return the speech style to something that would be a bit more useful in ordinary life?
Christian Tarsney: Yeah. Well, I think it depends on what skill you’re trying to train. So if you’re trying to train public speaking, then certainly competitive Lincoln-Douglas debate or policy debate as it’s practiced in the United States isn’t the way to do that. It doesn’t teach public speaking skills. And there are other activities like, well, speech, like oratory, for instance, that do teach that.
Christian Tarsney: If you’re trying to teach argumentation, well then the fact that we don’t care how rhetorically elegant or persuasive a debater is serves that goal, because you only care about the arguments. And there’s a norm that a lot of debaters accept, or a lot of judges accept, of non-intervention. That I think this is a bad argument, but it’s the job of the other debater to explain why it’s a bad argument. I’m not going to step in and say that it’s a bad argument.
Christian Tarsney: And yeah, I guess my take is it does train argumentation and critical thinking skills, and thinking on your feet and intellectual creativity. As long as you recognize the limitations of what you’re doing and recognize that if you, for instance, go into academia or even when you’re an undergraduate in college and you’re engaged in kind of genuine truth-seeking, there are ideas and skills and knowledge that you can take from competitive debate that are useful there, but you’re not doing the same thing. And the things that work in competitive debate don’t always work in truth-seeking contexts.
Robert Wiblin: Yeah. Have you seen any people do really well at debating, but potentially learn bad epistemics, learn bad lessons about how to think and how to argue, because they’ve just learned this kind of persuasive, throw-out argument style, and that maybe holds them back in other lines of work?
Christian Tarsney: Yeah, I guess I would say the thing that I’ve more commonly seen is people who are very good at the game of debate and just aren’t particularly interested in the actual issues that are being debated. Or maybe they learn some philosophy, for instance, because they need it to win debate rounds, but they’re just not that interested in philosophy. And then if you’re not interested in the subject matter intrinsically, then you’re just not going to become good at thinking about it.
Christian Tarsney: And then there certainly are people who learn things in a superficial way for debate purposes, and don’t immediately recognize the limitations of what they learned or how superficial their understanding is. But I think there are also plenty of people who really do find the questions that they’re debating genuinely interesting and do want to go and learn and think about those questions independently of just trying to win debate rounds.
Robert Wiblin: Okay. I’ll stick up a link to that video, if people want to see the extremely fast-speaking debate style. I hadn’t seen that one before, despite doing debating at high school myself.
The Berry paradox [02:35:00]
Robert Wiblin: Just another question I wanted to ask before we finish. What's your favorite philosophical problem or thought experiment, or perhaps the one you think is the most fun? Maybe one that doesn't necessarily have anything in particular to do with global priorities research.
Christian Tarsney: So a problem that, for whatever reason, I've always just found really compelling or fascinating comes from this whole family of philosophical paradoxes involving what's called self-reference, where the famous example is the liar paradox: "This sentence is false." And there are a thousand different paradoxes in this vicinity.
Christian Tarsney: So a puzzle that I've found particularly compelling since I encountered it in undergrad is what's called the Berry paradox. The paradox goes like this: there are some expressions in the English language that refer to numbers (for instance, "two plus two" refers to the number four), and of course there are some expressions that don't. And there's a finite number of expressions in English of any given length, say fewer than 100 characters long when you write them down. And so there's a finite number of natural numbers that can be referred to by English expressions fewer than 100 characters long.
Christian Tarsney: So now consider the following number: the smallest number not named by any English expression of fewer than 100 characters. There should be some such number, right? One, okay, we can name that in fewer than 100 characters; two, and so forth. But eventually we're going to encounter one that you can't name in fewer than 100 characters. Except that expression, or at least the original Berry version of it, "the smallest natural number not named by any English expression of fewer than 100 characters," is only 93 characters long. And so there must be a smallest number not referred to by any English expression of fewer than 100 characters, but that very number is also referred to by an English expression of fewer than 100 characters, namely the expression I just gave.
Robert Wiblin: That rules out that one, but then what about the next higher one? That's now the new one that will be referred to by this expression.
Christian Tarsney: Yes. But then that’s out, right? If that’s what that 93 character expression refers to.
Robert Wiblin: Yeah.
Christian Tarsney: I can’t say exactly why, but among all the self-reference paradoxes, that’s the one that always just blew my mind.
Robert Wiblin: We’ll stick up a link to maybe a Wikipedia article or another page about that self-reference paradox and maybe some others as well.
Robert Wiblin: Alright. My guest today has been Christian Tarsney. Thanks so much for coming on the 80,000 Hours podcast, Chris.
Christian Tarsney: Thanks, Rob.
Rob’s outro [02:37:24]
In case you don’t know, we at 80,000 Hours have an email newsletter which you might find useful.
If you subscribe, we’ll notify you when we update our job board, which lists hundreds of potentially high-impact job vacancies and stepping-stone roles that might help you have more impact in future.
The earlier you find out about these new opportunities, the less likely you are to miss an application deadline for a role you’d love to get.
We’ll also email you about our new research into potentially pressing problems and ways to solve them, as it comes out.
Emails go out to that list about 2–3 times per month.
You can join by putting in your email address here. If you’ve listened all the way to the end of an episode this intense, you seem like exactly the kind of person who’d enjoy being on the list.
Alright — the 80,000 Hours Podcast is produced by Keiran Harris.
Audio mastering for today’s episode by Ryan Kessler.
Full transcripts are available on our site and made by Sofia Davis-Fogel.
Thanks for joining, talk to you again soon.
Learn more
Global priorities research
Career review: Academic research
Career review: Philosophy academia
Career review: Economics PhD