Stefan Schubert: Moral Aspirations and Psychological Limitations

In this talk from EAGx Nordics 2019, Stefan Schubert explores the psychological obstacles that stop us from maximizing our moral impact, and suggests strategies to overcome them.

Below is a transcript of Stefan’s talk, which CEA has lightly edited for clarity. You can also watch this talk on YouTube or read the original transcript on Stefan’s website.

The Talk

Effective altruism is of course about doing the most good. The standard way in which effective altruism is applied is by taking what I call the external perspective. On this perspective, you look out into the world, and you try to find the most high-impact problems which you can solve. Then you try to solve them in the most effective way. These can be problems like artificial intelligence risks or global poverty.

But there is another perspective, which is the one that I want to take today: what I call the internal perspective. Here you rather look inside yourself, and you think about your own psychological obstacles to doing the most good. These could be obstacles like selfishness and poor epistemics. Together with Lucius, I’m studying these obstacles experimentally at Oxford, but today I won’t talk so much about that experimental work. Instead, I will try to provide a theoretical framework for thinking about this internal perspective.

Ever since the start of the effective altruism movement, people have talked about these psychological obstacles. However, the internal perspective hasn’t been worked out, I would say, in as much detail as the external perspective has. So in this talk I want to give a systematic account of the internal perspective to effective altruism.

The structure of this talk is as follows. First, I’ll talk about psychological obstacles to doing the most good. Then, I will talk about how to do the most good you can, given that you have these psychological obstacles. There will also be sub-sections to both of these main sections.

The first of these sub-sections concerns three particular obstacles to doing the most good. To do the most good, you need to, first, allocate sufficient resources to moral goals. Second, you need to work effectively towards those moral goals. And third, you need to have the right moral goals. There are psychological obstacles to doing the most good associated with all of these three factors. I will walk you through all of them in turn.

The first factor is the resources we allocate to moral goals. A bit of a simplified picture (I will add some nuance to it later) is that you can allocate some of your resources (like money and time) towards non-moral goals—typically selfish goals—and other resources towards moral goals. (I should also point out that the quantities and numbers during the first part of the talk are just examples—they are not to be taken literally.)

Now we are in a position to see our first obstacle: that we allocate insufficient resources to others. Here we see the dashed red line—that’s how much you ideally should allocate to others. We see that you actually allocate less than that—for instance, because you are selfish. So you allocate insufficient resources to others; that’s the first obstacle.

And now we can define what I call the altruism ratio: the amount of resources that you actually allocate to others, divided by the amount of resources that you potentially or ideally should allocate to others. We see that the altruism ratio is 1/2. Again, this is just an example.
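
Spelled out as a formula (the notation here is mine, not from the talk’s slides):

$$\text{altruism ratio} = \frac{\text{resources you actually allocate to others}}{\text{resources you ideally should allocate to others}}$$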

The second factor is effectiveness: how good you are at translating your moral resources into impact. We know, of course, that people are often very ineffective when they are trying to do good and help others. That’s part of the reason why the effective altruism movement was set up in the first place.

We can also define an effectiveness ratio analogous to the altruism ratio. This is just your actual effectiveness divided by the potential maximum effectiveness of the most effective intervention.
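
In the same (my) notation:

$$\text{effectiveness ratio} = \frac{\text{your actual effectiveness}}{\text{effectiveness of the most effective intervention}}$$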

But what ultimately counts is your impact on a “correct” moral goal, and your moral goal may be flawed. Historically, people’s moral goals were often flawed, meaning that we have reason to believe that our moral goals may be flawed as well. (I should point out that when I talk about “the correct moral goals” it might seem as if I am suggesting that moral realism is true—that there are objective moral truths. However, I think that one can talk in this way even if one thinks that moral anti-realism is true. But I won’t go into details on that question here.)

Rather, let’s notice that it’s important that your moral goal and the correct goal are aligned. That will often not be the case. There will be goal misalignment, when you have a flawed moral goal and your method of reaching it doesn’t help much with the correct moral goal. For instance, suppose that your goal is to maximize welfare in Sweden over the short run, and that the correct moral goal is to maximize welfare impartially over the long run. Then your actions in service of your moral goal might not help very much with the correct goal.

----

Now we can define an alignment ratio, which is the effectiveness of your work towards the correct goal divided by the effectiveness of your work towards your goal. This ratio will often be much less than one.
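
Again in the same (my) notation:

$$\text{alignment ratio} = \frac{\text{effectiveness of your work towards the correct goal}}{\text{effectiveness of your work towards your own goal}}$$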

(I should also say, as a side note, that this definition only works in a special case: when maximally effective interventions towards the two goals are equally effective. Otherwise, you need to have a slightly more complex definition. However, I won’t go into those details in order not to be bogged down by math. The spirit of this definition will remain the same.)

Let’s move on to a formula for calculating impact loss. We want a formula for calculating how much impact we lose because of these three psychological obstacles. That will be useful to calculate the benefits of overcoming the three obstacles. Therefore we need a definition of the impact ratio, which is associated with impact loss, and a formula for calculating the impact ratio as a function of the other three ratios.

The impact ratio is simple: it is just your actual total impact divided by your potential total impact. We see that in this example the vast majority of the potential impact is lost. The actual impact is just a tiny fraction of the potential impact.

Now we want to calculate the impact ratio as a function of the other three ratios. I came up with a somewhat simplified formula (which I think is not quite correct but is a good first stab), which is that the impact ratio is the product of the other three ratios. So if the altruism ratio is 1/2 and the effectiveness ratio is 1/3 and the alignment ratio is 1/4, then the impact ratio is 1/24, which of course is a very small number.
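
In symbols, using the same (my) notation and the example numbers from before:

$$\text{impact ratio} = \text{altruism ratio} \times \text{effectiveness ratio} \times \text{alignment ratio} = \frac{1}{2} \times \frac{1}{3} \times \frac{1}{4} = \frac{1}{24}$$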

Some implications of this formula:

  • First, the impact ratio is smaller than the other ratios, because the other ratios will be at most one, and often less than that.

  • Also, the impact ratio is typically very small, meaning that one has vast opportunities to increase one’s impact. In this example, the impact ratio was 1/24; thus, one could increase one’s impact 24 times.

  • And lastly, the potential benefits of improving on small ratios are larger than the potential benefits of improving on large ratios. For example, you can potentially double your impact if your ratio is 1/2, but triple it if your ratio is 1/3. (Of course, it might be harder to improve on the small ratios, but in principle you can improve your impact more via focusing on them.)
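
To make the last point concrete, here is a minimal sketch in Python, assuming the talk’s simplified product formula; the example ratios are the ones used above, and the function and variable names are mine:

```python
# Illustrative sketch: the example ratios (1/2, 1/3, 1/4) come from the talk;
# the simplified formula says the impact ratio is their product.

def impact_ratio(altruism: float, effectiveness: float, alignment: float) -> float:
    """Simplified formula from the talk: the product of the three ratios."""
    return altruism * effectiveness * alignment

ratios = {"altruism": 1 / 2, "effectiveness": 1 / 3, "alignment": 1 / 4}
baseline = impact_ratio(**ratios)  # 1/24, about 0.042

# Raising any single ratio to its maximum of 1 multiplies total impact by
# 1/ratio, so the smallest ratio offers the largest potential gain.
for name, value in ratios.items():
    improved = dict(ratios, **{name: 1.0})
    gain = impact_ratio(**improved) / baseline
    print(f"Perfecting {name} ({value:.2f} -> 1.0) multiplies impact by {gain:.0f}x")

# Prints: altruism 2x, effectiveness 3x, alignment 4x.
```

Perfecting the smallest ratio (alignment, 1/4) quadruples impact, whereas perfecting the largest (altruism, 1/2) only doubles it: that is the sense in which small ratios offer the larger potential gains.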

----

Let’s move on to the underlying causes of these obstacles. Here I will again give a simplified account, focusing on the causes which we have reason to believe that we can change.

The first cause is that you don’t know what you should do: you have false beliefs or incorrect values.

Second, you might know what you should do, but you don’t actually do it: you suffer from what psychologists call intention-behavior gaps. A classic example of an intention-behavior gap is that you want to quit smoking but nevertheless continue to smoke. You fail to behave in line with your considered intentions or values.

On false beliefs:

  • First, we could have false beliefs about morality or incorrect moral values.

    • We might underestimate our moral obligations, which might lead to the first obstacle: that we invest insufficiently in others.

    • We might have false beliefs about what moral goals we ought to have, which can lead to the third obstacle: that our moral goals are misaligned with the correct moral goal.

  • Second, we could have false empirical beliefs.

    • For example, we might have false beliefs about the effectiveness of different interventions, which can lead to the second obstacle: ineffective interventions.

    • These false beliefs may in turn be due to poor epistemics. We know from psychological research that humans are often poor at statistical inference, for instance. But there are also epistemic failures which are specific to moral issues. For instance, there is political bias—that people have a tendency to acquire empirical beliefs just because those beliefs support their political views. Also, they tend to use intuition-based moral thinking—they acquire moral beliefs not because of good reasons but just because of intuition.

----

Moving on to intention-behavior gaps: one kind of gap is when you have a moral intention, but you behave selfishly: you fail to resist selfishness. That can lead to the first obstacle. That’s obviously well-known.

But there is another kind of intention-behavior gap which is, I think, less widely discussed. That’s when you have moral intentions, and you do behave morally, but you behave morally in another way, as it were. You intend to effectively pursue a certain moral goal, but you actually choose interventions which you know are ineffective, or you pursue another moral goal. This can lead to the second or the third obstacle. Here you fail to behave in line with your considered moral views. Rather, you behave in line with other moral impulses.

For instance, you might passionately support a specific cause. You might know that indulging in that passion is not the way to do the most good, but you might fail to resist this passion. Similarly, you might have a tribal impulse to engage in animated discussions with your political opponents. You might know that this is not an effective way of doing the most good, but you fail to resist this impulse.

We see that there are multiple psychological obstacles to doing the most good. There is a lot of relevant moral-psychological research on these topics, by people like Jonathan Haidt, Paul Bloom, and many others. They have demonstrated that in the moral domain, we’re often very emotion-driven: our actions are quite haphazard, our epistemics are not too good, etc.

Much of this literature is highly relevant for effective altruism, but one thing that’s mostly missing from it is an account of what kind of mindset we should have instead of the default moral mindset. The authors I mentioned focus on describing the default moral mindset and say that it’s not so good, but they don’t develop a systematic theory of what alternative moral mindset we should have.

Here I think that the effective altruist toolbox for thinking could help. In the remainder of this talk, I’ll use this toolbox to think about how to do the most good you can, given these obstacles. I think that we should honestly admit that we have psychological limitations. Then we should think carefully about which obstacles are most important to overcome, and how we can do that.

(I should also point out that my hypotheses of how to do this are just tentative, and subject to change.)

----

First, let me talk about the benefits and costs of overcoming the three obstacles. When we think about this, we need to have a reference person in mind to calculate the benefits of overcoming the obstacles. The hypothetical reference person here is a person who has just found out about effective altruism, but hasn’t actually changed their behavior much.

Let’s now focus on the benefits and costs of overcoming the three particular obstacles. First, increasing altruism. The altruism ratio varies a lot with the kind of resource. It’s perhaps easiest to calculate with regard to donations. If your donations are 2% of your income and your ideal donations are 20% of your income, then the altruism ratio is 10%. Of course, some moral theories might be more demanding, in which case the ratio would be different.
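
In the notation from earlier:

$$\text{altruism ratio} = \frac{2\% \text{ donated}}{20\% \text{ ideal}} = 10\%$$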

It is a bit more difficult to calculate the altruism ratio regarding direct work, but in general I would say that for many people the benefits of increasing the altruism ratio can be considerable. However, it may be psychologically difficult to increase the altruism ratio beyond a certain point. We know from historical experience that people have a hard time being extremely self-sacrificial.

Let’s move on to increasing effectiveness. The effectiveness ratio may often be very low. Lucius and I are studying the views of experts on the effectiveness of different global poverty charities. We find that the experts think that the most effective global poverty charities are 100 times more effective than the average global poverty charity. If that’s true, there are vast benefits to be made from finding the very best charities. And, even if that number happened to be a bit lower, there would be large benefits to finding the best charities.

At the same time, the costs of switching to the best charities seem to be relatively low. First, you need to do research about which interventions are best. That seems quite low-cost. Bridging your intention-behavior gap might be somewhat higher-cost, because you might feel quite strongly about a specific intervention, but that psychological cost still often seems lower to me than the cost associated with bridging intention-behavior gaps due to selfish impulses. I would think that most people feel more strongly about giving up a significant portion of their income than they feel about ceasing to support a specific charity.

I should also point out that people seem to think a bit differently about lost impact which is due to too little altruism, compared to lost impact which is due to too little effectiveness.

If someone were to decrease their impact by 99% because they decreased their donations by 99%, they would probably feel guilty about that. But we don’t tend to feel similarly guilty if we decrease our impact by 99% through decreased effectiveness. Thus, it seems that we don’t have quite the right intuitions when we think about effectiveness. This is another reason to really focus on effectiveness and really try to go for the most effective interventions.

Let’s move on to acquiring the right moral goals.

The alignment ratio (the extent to which work towards your own moral goal is also effective work towards the correct moral goal) may be very low. One reason for that is that, as we already saw, you have some reason to believe that your moral views are wrong, because people have often been wrong in the past. And if your moral views are wrong, and you are working towards incorrect moral goals, then you may have a small or negative impact on the correct moral goal. It would be very lucky indeed if it just so happened that our work towards one moral goal were also effective towards a very different moral goal. We should not count on being lucky in that way.

This means that the benefits of finding the right moral goal might be very high. Of course, we might not actually find it if we try, but the expected benefits might still be high. And the psychological cost of changing moral goals may be quite small, for the same reasons as the psychological costs of increasing effectiveness may be small.

----

At this point, let me look at an additional way that we can increase our impact. We’ve focused on how to expand and use our moral resources. We haven’t discussed our non-moral or selfish resources. An additional way of increasing impact is through coordinating between our moral and selfish selves. This can be seen as an aspect of effectiveness, but it’s different from the kind of effectiveness that we’ve focused on so far, which has been about how to use your moral resources most effectively.

For instance, you might find the most effective option that you could choose intolerable from a selfish perspective. It might be some job that you could take but which you just don’t like from a selfish point of view. Then you should try to find the most effective compromise: such as a job which is still high-impact (if not quite as high-impact as the highest-impact job) and which is better from a selfish perspective.

Similarly, you may consider which particular selfish obstacles to focus on overcoming. For instance, you might feel that your donations are too small. Or you might be employing self-serving reasoning because you don’t want to admit that you were wrong about something; thereby you nudge other effective altruists slightly in the wrong direction, and decrease their impact. Or you might have a disagreeable personality, leading people who know you not to want to get involved in the effective altruism movement, which lowers your impact. When you are considering which selfish obstacle to overcome, you should be altruistic first and foremost where doing so has the highest impact and the smallest selfish costs.

Let’s move on to the last section, which is on how to address the underlying causes. First, we should correct false beliefs: we should learn what it is that we should do, by improving our epistemic rationality. Second, we should bridge our intention-behavior gaps: we should actually do what we know that we should do, by improving our instrumental rationality.

Let’s walk through these in turn.

First: correcting false beliefs and improving our epistemics.

Some aspects to improve:

  • We should search for knowledge systematically, including knowledge about how to acquire knowledge—what is called epistemology. This seems to be something that we don’t naturally do: we don’t have a knowledge-oriented mindset in the moral domain. We should change that.

  • We should also overcome motivated reasoning and tribal tendencies. We should develop epistemic virtues such as intellectual curiosity and intellectual honesty.

    • One thing that arguably could help here is to make good epistemics part of your identity. Similarly, developing a community culture emphasizing good epistemics could help. Both are things that effective altruists often try to do.

  • We should also bridge our intention-behavior gaps. To do that, we should develop instrumental rationality virtues, such as a focus on impact. Here, again, changing our identity might be useful: we can make doing the most good a part of our identity.

    • However, I think that making doing the most good part of our identity might help more with overcoming passions for specific causes than with overcoming selfishness—which might, in line with what I said before, be a stronger urge which is harder to overcome.

Conclusions:

  • We could greatly increase our impact through overcoming obstacles to doing the most good.

  • Obstacles to acting altruistically, pursuing the most effective interventions, or acting on the correct moral goal can be very important to overcome.

To overcome these obstacles, we should acquire more knowledge and develop better epistemics: we should learn what it is that we should do. We should bridge our intention-behavior gaps: we should actually do what we know that we should do. And lastly, we should replace the default moral mindset, which we saw is quite emotion-driven and characterized by poor epistemics and haphazard decision-making, with an impact-maximizing mindset and impact-maximizing virtues.