Disclaimer: My comment is less concise, considered, and polished than I’d hope for under normal circumstances (for personal/family reasons), but I think the pros of substantive engagement outweigh the cons (and there hasn’t been a ton of substantive engagement).
In short:
I continue to essentially disagree with Leif’s criticisms of GiveWell, although some observations about moral credit are worth reiterating in my view;
I think there’s something to Leif’s arguments insofar as EA is viewed as something like a Grand Unified Theory of altruism, but they are less compelling against (my own) view of EA as something we do on the margins because we think the EA tools are underutilized in the altruistic world as a whole;
I generally agree that young EAs should devote more time and energy to broadening experiences that promote what Leif terms “good judgment”; and
While endorsing the importance of asking whether one knows enough to be making recommendations, I would place more emphasis on the costs of inaction than I think the Open Letter does.
On Moral Credit and Counterfactuals
What I liked: I think the emphasis on counterfactuals in EA creates risks of giving donors too much moral credit / patting ourselves on the back too much for donating if we are not careful. As a (small-time) donor, I get the privilege of being a counterfactually important part of a team that gets a bednet over a child’s head and (given enough bednets) prevents a fatal malaria infection. There’s a lot of credit to share, and it is not virtuous to claim it all for oneself. I think there is value in reiterating this to guard against people implicitly equating counterfactual impact with being the sole cause.
What I didn’t like: I don’t think a reasonable prospective donor reading GiveWell’s materials would come away believing that they were solely responsible for saving a child’s life by donating $5,000 and that no one else gets moral credit for that. The counterfactual impact is important to convey: because someone donated ~$5,000 to bednets instead of to opera, a child in a developing country lived who otherwise wouldn’t have made it. There are ways of presenting impact that mislead donors; I generally don’t see evidence of that here.[1] Finally, all charity involves working with a team; even if it were somehow possible to be a truly solo altruist, it probably wouldn’t be pretty!
I often analogize to what I would expect out of a non-EA organization, in an attempt to avoid both demanding extra rigor of the EA org and giving it a pass. If a university were pitching a donor to endow a new chair in the philosophy department, I wouldn’t expect them to remind me that my donation would be worthless but for (e.g.) the ancient Greeks, the institutions that trained the new hire, the university’s general counsel, and the students.
What would move the needle: I can’t get back into the headspace of a new GiveWell donor at this point. But if someone ran a study of people new to GiveWell, and a significant number came away believing something like “the donor was the sole cause of saving someone’s life,” that would make me think that GiveWell was communicating unreasonably with donors here.
On Non-Exceptionalism
At least in global health, I think much of what EAs are doing is largely in line with the broader global health movement. For instance, the IFRC explains:
These [insecticide-treated bednets] are estimated to be responsible for two-thirds of the reduction in malaria cases over the past decades. Thanks to the efforts of national malaria programmes and partners, about 68% of households across sub-Saharan Africa own at least one net. Most of these nets have been bought via funds from The Global Fund to Fight AIDS, Tuberculosis and Malaria, the United States President’s Malaria Initiative, UNICEF and the Against Malaria Foundation (AMF).
If the list of actors who don’t know enough to be doing anything in this field includes the Global Fund, the US Government, and UNICEF too, then the problem goes way deeper than EA!
Likewise, looking at Vitamin A supplementation, my first Google hit was Nutrition International, an organization whose work (in general) is sponsored by Gates, UNICEF, the governments of France, Australia, Canada, and the UK, and others.
So looking for sources outside EA (as the Open Letter suggests) produced reasons to believe these are not things EAs are doing on half-baked EA theories. In contrast, I’ve read enough about the (copyrighted yet publicly available) Scientology scriptures to know that there are relatively few things in life that can fairly be compared to Scientology.
Grand Unified Theories vs. Plays on the Margin
I think there’s a difference between EA as something like a Grand Unified Theory (GUT) of altruism, or even a GUT of global health/wellbeing altruism, and EA on the margins (a belief that more of the charitable pie should be distributed in general accord with EA principles than is currently the case).
Certainly some philosophical writings tend to set EA up as a GUT, and so critiquing EA-as-GUT is a totally fair exercise. However, EA controls only a tiny fraction of even US charitable spend, and a small fraction of charitable spend on global health/wellbeing (especially compared to governments).
For an ordinary practitioner / donor in many cause areas, philosophical arguments about EA-as-GUT may be interesting but are of relatively little practical import.
Evaluating EA-as-something-like-a-GUT may be more important where a substantial fraction of money or other inputs coming into a field show strong EA influences (e.g., farmed animal welfare).
Moreover, while I think most social movements have a tendency to view themselves as GUTs on their topic of interest, people in the EA community are probably even more prone to treating math-y spreadsheets as a GUT. One can believe spreadsheets are important while also believing that there are more things in heaven and earth than are dreamt of in EA spreadsheets (and that having 100% of charitable spend flow from current EA methodology might not be such a grand thing!).
Therefore, while my view of EA / math-y spreadsheets is more favorable than the Open Letter’s, I think there is value in reiterating the problems with treating it as a GUT.
Baskets and Tools
The Open Letter argues:
Let me now offer a point that’s really important, especially if you’re in a STEM field and have never heard it before. There are many definitions of ‘rational action.’ EA is centered on one understanding of ‘rational action’ but there are lots of others.
So why believe that EA’s particular understanding of ‘rationality’—just one amongst many—is the best one? Why believe that thinking in terms of EV, marginality, and so on really is a way of always being smart, instead of a way of sometimes being dumb?
I think there may be something here but would put a somewhat different spin on it. Flaws are something I expect to see in the EA paradigm and in every other altruistic paradigm. Putting all of the world’s altruistic eggs into one basket would be an inherently perilous exercise, so the question is more “are we putting too many or too few eggs in this altruistic paradigm’s basket?”
There’s a tension here: on the one hand, I think we want specialist communities rather than everyone who seeks to do altruism having only a shallow knowledge of how to use three dozen types of tools. I want my urologist to have taken a deep dive into urology, but I also want them to know enough about other ways of practicing medicine to recognize when urology doesn’t offer the right tools for the job. They also need that broader knowledge simply to be a good urologist.
Another analogy: Hammers are important tools, but not every problem is a nail. Moreover, other tools are often necessary to take action on a nail. I do think there is a tendency for EA (and probably most other altruist movements) to see ambiguous hardware as nails. Someone with a toolbox of almost exclusively hammers is not going to be as useful in the world as someone who has at least a basic appreciation for and ability to use other major hand tools.
Technicians
In American medicine, we have physicians who have very broad general training plus years of onerous training in their specialty, as well as “midlevels” (like nurse practitioners) who have much less education and training. From what I hear,[2] midlevels can be very good when operating in their comfort zone (especially if narrowly specialized) but often lack the ability to respond well to curveballs. I do worry that the current recommended “education” of EAs veers too much toward creating narrow technicians rather than the equivalent of residency- or even fellowship-trained physicians. Cf. the Open Letter’s discussion of “good judgment.”
To be clear, midlevels have an important role to play in the US healthcare system; producing residency-trained physicians is extremely expensive. I didn’t need an MD when I went in this week for a suspected sinus infection. The problems often come when midlevels forget that they are midlevels, lacking the broad training and experience of residency-trained physicians, and take on workloads beyond their capacities.
I would guess that, in this analogy, Leif thinks most EA leaders are midlevels who think they are fellowship-trained physicians. This response is already long, so I’m not going to evaluate that possibility further here.
Finally, I do think that both EA and broader Western society tend to undervalue diverse life experience. I also think that holding up the tech industry as an exemplar carries some of the pitfalls the Open Letter alludes to. I would generally endorse Leif’s recommendations about good judgment insofar as they are not particularly costly. If I knew that someone had been preselected to be a philosopher-king, I’d be more inclined to endorse the more costly parts too. But I feel a bit of unease about globally recommending that young people pour a lot of time and energy into things that broader society doesn’t seem to value that much.
The Costs of Inaction
The Open Letter states:
This is a critical flaw in EA leadership and in the culture they’ve created. The crucial-but-absent Socratic meta-question is, ‘Do I know enough about what I’m talking about to make recommendations that will be high stakes for other people’s lives?’
This is an important question, but I don’t think it can be asked without considering other actors and the costs of inaction. If we could enforce a global norm that no one should be doing anything in AI without being able to answer this question in the affirmative (and have their answer verified by disinterested outside experts), that would be simply amazing. In fact, one can view AI policy as directed significantly toward that goal.
One problem is that other actors aren’t asking this question. Tyson Foods (the meat conglomerate) is going to do factory-farm-conglomerate things like their executives’ paychecks depend on it (and they do). Even laying aside EA-type AI safety worries completely, the financial interests of big AI companies and their employees are a powerful incentive to move fast even though they don’t know nearly enough about the civilization-disrupting and civilization-defining functions AGI would bring. So recommendations and actions will happen whether we are making them or not.
EA often works in neglected fields, and I think that can play into the calculus as well. A recommendation based on sketchy information isn’t necessarily inappropriate if that’s the only information on the table.
So I would phrase the question differently: In light of both the possibilities and opportunity costs of gaining better knowledge, and the effects of other actors who would occupy the field in my absence, do I know enough about what I’m talking about to make high-stakes recommendations?
[1] For instance, touting impact on an organization’s most effective program even though that program can only absorb a portion of the funding obtained as a result of the appeal.
[2] This is a simplified example, and I’ve heard about this mostly from the physician standpoint rather than the midlevel one.