Only a minority of EA’s total impact comes from immediate poverty relief.
This takes on the burden of all the historical and qualitative arguments it has avoided, e.g. the tricky stuff about the cultural impact of certain kinds of rhetoric, problems of power and compromise, the holistic and long-term impact of the changes it seeks, the relationship between its goals and its methods, etc.
Sure. Now we are really talking about donations to movement building rather than bed nets. But it’s not prima facie obvious that these things will point against EA rather than in favor of it. So we start with a basic presumption that people who aim at making the world better will, on average, make the world better overall compared to those who don’t. Then, if the historical and qualitative arguments tell us otherwise about EA, we can change our opinion. We may update to think EA is worse than we thought before, or we may update to think that it’s even better.
However, critics only seem to care about the dimensions along which it would be worse. Picking out the one or two particular dimensions where you can make a provocative enough point to get published in a humanities journal is not a reliable way to approach these questions. It is easy to come up with a long list of positive effects, but “EA charity creates long-run norms of more EA charity” is banal, and nobody is going to write a paper making a thesis out of it. A balanced overview of different effects, across multiple dimensions and plausible worldviews, is the valid way to approach it.
reliant on deeply uncertain evidence and thus to some extent a matter of faith and commitment rather than certainty
You still don’t get it. You think that if we stop at the first step—“our basic presumption that people who aim at making the world better will on average make the world better overall”—then it’s some sort of big assumption or commitment. It’s not. It’s a prior. It is based on simple decision theory and thin social models which are largely independent of whether you accept liberalism or capitalism or whatever. It doesn’t mean EAs are telling you that you’re wrong and have nothing to say; it means they are telling you that they haven’t yet identified an overall reason to favor what you’re saying over some countervailing possibilities.
You are welcome to talk about the importance of deeper investigation, but the idea that EAs are making some thick assumption about society here is baseless. Probably they don’t have the time or background that you do to justify everything in terms of lengthy reflectivist theory. Expecting everyone else to spend years reading the same philosophy that you read is inappropriate; if you have a talent, then just start applying it, and don’t attack people just because they don’t know it already (or, worse, attack people for not simply assuming that you’re right and all the other academics are wrong).
It is actually ‘prima facie obvious’ to some people that philanthropic do-gooders—those who ‘aim at making the world better’ through individualised charity—are not actually having a positive impact. This kind of critique of charity and philanthropy is much older than EA.
So maybe everyone will agree with the thin claim that people who try to make the world better in some way will usually have more positive impact than those who don’t try. But this has no implications for charity vs politics or anything else—it seems to be no more than the truism that it’s good to care about goodness. Though I guess some consequentialists would quibble with that too.
I did indeed cherry-pick some provocative issues to put in the article, but this was to illustrate the complexity of the issues rather than just to score cheap points.
And you may well be right that I don’t get it, as I am not steeped in decision theory or Bayesian methods. So maybe the point is better put this way: EAs do indeed have this prior assumption that charity is good and a charitable movement is a good movement. But plenty of people quite justifiably have the opposite prior—that charity is mostly bad and that a charitable movement does more harm than good by slowing down and distracting from necessary change. If so, EA and its critics are in the same position—and so it’s not reasonable for EAs to chide their critics for somehow not caring about doing good or adopting anti-charity positions because it makes them sound cool to their radical friends.
It is actually ‘prima facie obvious’ to some people that philanthropic do-gooders—those who ‘aim at making the world better’ through individualised charity—are not actually having a positive impact.
And those people are wrong and lacking in good reasons for their point of view. (They’re also rare.)
But this has no implications for charity vs politics or anything else—it seems to be no more than the truism that it’s good to care about goodness.
You think that just because something is a truism, it has no implications? It contradicts your point of view, and you think it’s a truism with no implications? It tells us that we don’t need to play your game of overconfident subjective interpretations of the world in order to justify our actions.
I did indeed cherry-pick some provocative issues to put in the article, but this was to illustrate the complexity of the issues rather than just to score cheap points.
But you gave a very narrow take where the “complexity of the issues” is actually reducing everything to a single goal of implementing socialism. As I said already, you are picking one or two dimensions of the issue and ignoring the others. You only talk about the kind of complexity that can further your point of view. That’s not illustrating complexity, it’s pretending that it doesn’t exist.
EAs do indeed have this prior assumption that charity is good and a charitable movement is a good movement.
You are misquoting me. I did not present this as a prior assumption. I don’t grow the EA movement because of some prior assumption; I grow it because everywhere I look it is epistemically and morally superior to its alternatives, and each project it pursues is high-leverage and valuable. The prior assumption is that, when something is aimed at EA goals, it probably helps achieve EA goals.
If so, EA and its critics are in the same position
From your point of view, literally everyone is in the “same position” because you think that everyone’s point of view follows from subjective and controversial assumptions about the world. So sure, critics might be in the Same Position as EA, but only in the same banal and irrelevant sense that antivaxxers are in the Same Position as mainstream scientists, that holocaust deniers are in the Same Position as mainstream historiography, and so on for any dispute between right people and wrong people. But of course we can make judgments about these people: we can say that they are not rigorous, and that they are wrong, and that they are biased, and that they must stop doing harm to the world. So clearly something is missing from your framework. And whenever you identify that missing piece, it’s going to be the place where we stuff our criticisms (again, assuming that we are using your framework).
and so it’s not reasonable for EAs to chide their critics for somehow not caring about doing good or adopting anti-charity positions because it makes them sound cool to their radical friends.
There’s something annoying about writing a whole paper that is essentially jockeying for status rather than arguing for any actual idea on the object level. Interesting that this is the pattern for leftists nowadays.