Hello, I’m Paolo, one of the authors of the article. We were pointed to this thread and have been thrilled to see the discussion it’s generating. Romy and I will take some time to go through all your comments in the coming days and aim to post a follow-up blog post answering the various points raised more comprehensively. In the meantime, please keep posting here and keep up the good discussion! Thanks!
I’m excited to hear that! Looking forward to seeing the article. I particularly had trouble distinguishing between three potential criticisms you could be making:
It’s correct to try to do the most good, but people who call themselves “EAs” define “good” incorrectly. For example, EAs might evaluate reparations on the basis of whether they eliminate poverty as opposed to whether they are just.
It’s correct to try to do the most good, but people who call themselves “EAs” are just empirically wrong about how to do this. For example, EAs focus too much on short-term benefits and discount long-term value.
It’s incorrect to try to do the most good. (I’m not sure what alternative your essay proposes here.)
If you are able to elucidate which of these criticisms, if any, you are making, I would find it helpful. (Michael Dickens writes something similar above.)
Two subcategories of idea 3 that I see, and my steelman of each:
3a. To maximize good, it’s incorrect to try to do the most good. Most people who apply a maximization framework to the amount of justice will create less justice than people who build relationships and seek to understand power structures, because thinking quantitatively about justice is unlikely to work no matter how carefully you think. Or, most of the QALYs that we can create result from other difficult-to-quantitatively-maximize things like ripple effects from others modeling our behavior. Trying to do the most good will create less good than some other behavior pattern.
3b. “Good” cannot be quantified even in theory, except in the nitpicky sense that mathematically, an agent with coherent preferences acts as if it’s maximizing expected utility. Such a utility function is meaningless. Maybe the utility function is maximized by doing X even though you think X results in a worse world than Y. Maybe doing Z maximizes utility, but only if you have a certain psychological framing. Even though this doesn’t make sense, the decisions are still morally correct.
I think something like 3a is right, especially given our cluelessness.
Hi Paolo, I apologise that this is just a hot take, but from quickly reading the article, my impression was that most of the objections apply more to what we could call the ‘near-termist’ school of EA than to the longtermist one (which is quite happy to work on interventions that are difficult to predict or quantify). You seem to basically point this out at one point in the article. When it comes to the longtermist school, my impression is that the core disagreement is ultimately about how important/tractable/neglected it is to do grassroots work to change the political and economic system compared to something like AI alignment. I’m curious whether you agree.
You mention that:

“Neither we nor they had any way of forecasting or quantifying the possible impact of [Extinction Rebellion]”
and go on to talk about this as an example of the type of intervention that EA is likely to miss due to its lack of quantifiability.
One thing that would help us understand your point is for you to answer the following question:
If it’s really not possible to make any kind of forecast about the impact of grassroots activism (or whatever intervention you would prefer), then on what basis do you support your claim that supporting grassroots activism would improve its impact? And how would you have any idea which groups or which forms of activism to fund, if there’s no possible way of forecasting which ones will work?
I think the inferential gap here is that (we think) you are advocating for an alternative way of justifying [the claim that a given intervention is impactful] other than the traditional “scientific” and “objective” tools (e.g. cost-benefit analysis, RCTs), but we’re not really sure what you think that alternative justification would look like or why it would push you towards grassroots activism.
I suspect that you might be using words like “scientific”, “objective”, and “rational” in a narrower sense than EAs think of them. For instance, EAs don’t believe that “rationality” means “don’t accept any idea that is not backed by clear scientific evidence,” because we’re aware that often the evidence is incomplete, but we have to make a decision anyway. What a “rational” person would say in that situation is something more like “think about what we would expect to see in a world where the idea is true compared to what we would expect to see if it were false, see which is closer to what we do see, and possibly also look at how similar things have turned out in the past.”
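To make that last comparison a bit more precise (a minimal sketch in standard Bayesian terms, not something spelled out in the article or in the comments above): the idea H gains credence to the extent that the evidence E we actually observe is more likely in a world where H is true than in one where it is false,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)},$$

so even when the evidence is incomplete, the question “which world does what we see look more like?” has a well-defined answer whenever P(E | H) and P(E | ¬H) differ.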
One thing I kept wondering at multiple points in the post was what your angle for writing it was. While I think the post was written with the goal of demarcating “your brand” of radical social justice from EA and promoting it, you clearly seem to agree with the core “EA assumption” (i.e., that it’s good to use careful reasoning and evidence to try to make the world better), even though you disagree on certain aspects of how best to implement this in practice.
Thus, I would really encourage you to engage with the EA community in a collaborative and open spirit. As you can tell from the reactions here, criticism is much appreciated by the EA community when it is well reasoned and articulated. Of course there are some rules to this game (i.e., as mentioned elsewhere, you should provide justification for your beliefs), but if you have good arguments for your position you might even effect systemic change in EA ;)