I found reading this valuable. I've long thought there seems to be a way of thinking that is widely shared among effective altruists (even if it doesn't need to be – the project is not committed to a particular view). But it's difficult to pinpoint exactly what this is, and I think you've done an excellent job of that here.
If you get the time, I think it'd be valuable to produce an executive summary of this (even if it's just in dot-points), as I suspect it'll get a much wider reach that way than through this link post (and I think it deserves this reach).
Alternatively, since there are quite a few threads in this essay, it could be worthwhile publishing it as a sequence, going through one chapter at a time.
Here are a few initial comments/questions; I might add more if I get time.
I agree with your characterisation of EA as tending to be "methodologically individualist" – though I don't quite follow your "process"-focussed alternative. Can you offer a real-world example of where the two different methodologies might conflict?
I think the idea that some charities (and interventions more broadly) can potentially do orders of magnitude more good than others is pretty core to the epistemic foundations of EA. It's a fact that motivates optimising, and I think recognising this and taking it seriously has been one of the reasons EA has been so successful to date (e.g., GiveWell has massively improved how well funded the best charities are).
If I were to accept that a lot of your criticism of this optimisation mindset is correct, how could I avoid throwing the baby out with the bathwater?
Perhaps another way to frame this: it seems to me there are many cases where the types of reasoning you've criticised (formal, precise, quantified, maximising) are the very things that have made EA quite successful to date, as they seem to be tremendously effective in many domains (even complex ones!). Do you agree with the premise of my question? If so, how do you tell when it is appropriate to avoid this style of reasoning (or perhaps, how do you tell which parts of this reasoning to jettison)?
Hi Michael,
Thanks for reading the whole thing, for your kind words, and for your considered criticism.
First, your doubt about my idea of a "process"-based approach to ethics. My discussion of static vs dynamic ethics in the essay is very abstract, so I understand your desire to see it at a more concrete level.
In basic terms, the distinction is just between thinking about specific interventions vs thinking about policies. That's why I said the static/dynamic distinction maps onto the distinction between expected utility maximisation and the Kelly criterion. The former considers how to do best in a one-off action (maximise payoff); the latter, how to do best across repeated actions embedded in time (maximise the growth rate of payoff). When it comes to ethics, I think everyone is capable of both ways of thinking, and everyone practises both ways of thinking in different contexts.
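To make the contrast concrete, here is a toy simulation (my own sketch, not from the essay; the even-odds bet with a 60% win probability is an invented example). Staking your whole bankroll maximises the expected payoff of any single round, but repeated over time it almost surely ends in ruin; staking the Kelly fraction maximises the long-run growth rate of wealth instead.

```python
import random

P_WIN = 0.6                     # chance of winning each even-odds bet (assumed)
KELLY_FRACTION = 2 * P_WIN - 1  # Kelly stake f* = p - q = 0.2 for even odds

def simulate(bet_fraction, rounds=1000, seed=1):
    """Final wealth after repeatedly staking a fixed fraction of wealth."""
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(rounds):
        stake = wealth * bet_fraction
        wealth += stake if rng.random() < P_WIN else -stake
    return wealth

# "Static" strategy: all-in each round, maximising one-round expected payoff.
# A single loss zeroes the bankroll, so over many rounds ruin is near-certain.
print("all-in:", simulate(1.0))
# "Dynamic" (Kelly) strategy: compounds at the optimal long-run growth rate.
print("kelly: ", simulate(KELLY_FRACTION))
```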
When it comes to traditional ethical theories, I would say Act Consequentialism is the most static. Virtue ethics, Confucian role ethics, and Deontology are all more on the dynamic side (since they offer policies). But this is just a rough statement, and I don't mean to imply by it that Act Consequentialism is the worst ethical theory.
The main worry from my point of view is when the static approach dominates one's method of analysis. One way in which this manifests (albeit arguably of little relevance to EA) is in Utopian political projects. People reason that it would be good if we reached some particular state of affairs but don't reason well about the effects of their interventions in pursuit of such a goal. In part, the issue here is thinking of the goal as a "state" rather than a "process". A society is a very complex, self-organising process, so large interventions need to be understood in process-theoretic terms.
But it's not just Utopian thinking. I believe that technocratic thinking, as practised by EA, can often fall into similar traps. People in the community will probably know that Angus Deaton has criticised some EA-endorsed interventions from exactly this kind of perspective, his claim being that the interventions are too naive because they don't understand the system they're intervening in. (I'm not an expert on this stuff myself, so I have no idea whether he is right or wrong.)
Along somewhat different lines, I also made the point in the essay that certain formal questions in utilitarian ethics only seem vital from a static-first perspective. MacAskill spills a lot of ink on population ethics in What We Owe the Future because he sees it as having (some) real-world relevance to how we should think about existential risk. On MacAskill's view, it matters because if population ethics can prove that you should want to statically maximise the number of beings, and the number that will exist in the far future mostly depends on whether or not humanity goes extinct, then we should care far more about truly existential risks (like AI) than about risks that may not really be existential (like climate change). I don't agree with caring far more about AI than climate change, though in large part that's because of different empirical beliefs about the relative risks. But that's not even the point. The point is just that there is an alternative worldview in which population ethics need never come up. My highest-level ethical perspective is not precise but is something like "Maximise the growth in complexity of civilisation without ruining the biosphere". My views about existential risk follow from that. (Those views, by the way, are that extinction is the worst possible thing that could happen, so in that sense I totally agree with MacAskill, but I get the bonus that I don't have to lie awake at night worrying about Derek Parfit.)
OK, now for your other critique/question, which is basically: how do we take on board a critique of optimisation without losing what's good and useful and effective about EA? I think I agree with the premise of your last question, which is that EA has done some really good stuff and that it's been based on the formal methods I've critiqued.
I guess there are different levels of response I could have to this. Maybe the essay doesn't always read like this, but I would say my main goal was to describe the limitations of expected utility reasoning and optimisation-centric perspectives, not to rule them out completely. What I would say is that Effective Altruism is not the only possible approach to doing good in the world, and I do think it's very important to understand this. To me, the right way of thinking about this is ecological: different ways of doing good have different ecosystem functions. I think adding Effective Altruism into the mix has probably made the world of philanthropy a lot more effective and good, as you suggest, but philanthropy shouldn't be the main way we make the world better in any case. Taking this to an extreme to illustrate the point: I think it would be far better if every nation in the world had good, solid, democratic governments than if every person in the world were an Effective Altruist but every nation were ruled by tyrants.
Ultimately, I don't know what Effective Altruism should jettison or what it should keep. That wasn't really the point of my essay, and I have no good answers… except maybe to say that, in its intellectual methodologies, I'm sure there are some things it could learn from the fields I discuss in the essay. Maybe the main thing is a good dose of humility.