First off, thank you to everyone who worked on this post. Although I don’t agree with everything in it, I really admire the passion and dedication that went into this work—and I regret that the authors feel the need to remain anonymous for fear of adverse consequences.
For background: I consider myself a moderate EA reformer—I actually have a draft post I’ve been working on that argues the community should democratically hire people to write moderately concrete reform proposals. I don’t have many of the “Sam” characteristics, and the only thing of value I’ve accepted from EA is one free book (so I feel free to say whatever I think). I am not a longtermist and know very little about AI alignment (there, I’ve made sure I’d never get hired if I wanted to leave my non-EA law career!).
Even though I agree with some of the suggested reforms here, my main reaction to this post is to affirm that my views are toward incremental/moderate—and not more rapid/extensive—reform. I’m firmly in the Global Health camp myself, and that probably colors my reaction to a proposal that may have been designed more with longtermism in mind. There is too much in the post for anyone to fully react to without several days of thinking and writing time, so I’ll hit on a few high points.
1. I think it’s critical to look at EA activity in the broader context of other actors working in the same cause area.[1] I suggest that much of EA’s value in spaces where it is a low-percentage funder lies in taking a distinctive approach that zeroes in on the blind spots of bigger fish.[2] In other words, an approach that maximizes diversity of perspective within EA may not maximize the effective diversity of perspective in the cause area as a whole. Relatedly, the important question is not necessarily how well EA’s epistemic tools would work in a vacuum with no one else in a given space.
In spaces where EA is a niche player, I am concerned that moves toward looking more like other actors may well be counterproductive. In addition to GH&D, I believe that EA remains a relatively small player in animal advocacy, and even in fields like pandemic prevention, compared to the total resources in those areas.
2. I feel that the proposal at certain points holds EA to a much higher standard than comparable movements. World Vision doesn’t (to my knowledge) host a message board where everyone goes to discuss various decisions its leadership has made. I doubt the Gates Foundation devotes a percentage of its spend to hiring opposition researchers. And most major charitable funders don’t crowdsource major funding decisions. Other than certain meta spending (which raises some conflict-of-interest flags for me), I don’t see anything that justifies making these demands of EA unless one is simultaneously making them of similar outfits.
Given that a large portion of the authors’ critique is about learning from others outside EA, I think that the lack of many of their proposed reforms in many similarly-sized, mature charitable movements is a significant data point. Although I believe in more process, consultation, and “bureaucracy” than I think the median EA does, I think there has to be a recognition that these things incur significant costs as well.
3. Portions of this reform package sound to my ears like the dismantling of EA and its replacement with a new movement, Democratic Altruism (“DA”). It seems unlikely that much of classic EA would be left after radical democratization, at least—there would likely be a flood of incoming people, many with prior commitments, attracted by the ability to vote on how to spend $500MM of Uncle (Open) Phil’s money every year. Whoever controls the voter-registration function would ultimately control the money.
Now, I think DA is a very interesting idea, and if I had a magic wand to carve off a tiny slice of Western charitable spending and route it to a DA movement, I think that would more likely than not be net positive. I’m just not clear on why EA should feel obliged to be the ashes from which DA arises—or why EA’s funders should feel obliged to fund DA while all the other big-money interests get to keep their handpicked people making their funding-allocation decisions.
4. As I noted in a comment elsewhere on this thread, I don’t think the community has much leverage over its funders. Unfortunately, it is much easier to come up with interesting ideas than to find people who are willing and able to fund them. That goes double for Grade-A funders—and the proposal suggests rejecting, or at least minimizing, various classes of less-desirable donors.
As a recent post here reminds us, “[o]nly the young and the saints are uncompromised.” There’s rarely a realistic, easy way to get large sums of money for one’s cause without becoming compromised to some extent in the process. There’s the traditional route of cultivating an army of small and mid-size donors, but that takes many years, and you end up spending lots of resources and energy on the care and feeding of those donors instead of on getting stuff done. I suspect most movements that will only take Grade-A donor money will spend a long time waiting to launch and seeking funding. That’s a massive tradeoff (I really value my bednets!), and it’s not one I personally want to make.
One final, more broadly conciliatory point: EA can be too focused on what happens under the EA brand name and can seem relatively less interested in empowering people to do good effectively outside the brand. It doesn’t have a monopoly on either effectiveness or altruism, and I’ve questioned (without getting much in the way of upvotes) whether it makes sense to have a unified EA movement at this point.
I like the idea of providing different options for people, where they can do good as effectively as possible in light of their unique skills, passions, and interests. For some people, that’s going to be classic GiveWell-style EA (my own likely best fit); for others, something like the current meta; for yet others, something like what is in this proposal; and there are doubtless many other potential flavors I haven’t thought about. Some people in the community are happy with the status quo; some are not. The ideal might be to have spaces where everyone would be locally happy and effective, rather than to try to preserve or reform the entire ecosystem into something one personally likes (but that isn’t conducive to others).
[1] For example, in Global Health & Development, you have a number of NGOs [e.g., World Vision at $1.2B is several times EA’s entire spend on GH&D; see generally here for a list of big US charities], plus the truly big fish like the Gates Foundation [$6.7B, although not all GH&D] and various governments. So the vast majority of this money is being moved through democratic processes, Gates-type subject-matter experts, and traditional charities—not through EA.
Portions of this reform package sound to my ears like the dismantling of EA and its replacement with a new movement, Democratic Altruism (“DA”)
I like the choice to distill this into a specific cluster.
I think this post portrays a very different vision of EA than the one we have, and than the one I think many current EAs want. It seems like some particular cluster of this community might be in one camp, in favor of this vision.
If that were the case, I would also be interested in seeing this experimented with by some cluster. Maybe even make a distinct tag, “Democratic Altruism,” to help organize conversation on it. People in this camp might be most encouraged to directly try some of these proposals themselves.
I imagine there would be a lot of work to really put forward a strong idea of what a larger “Democratic Altruism” would look like, and also, there would be a lengthy debate on its strengths and weaknesses.
Right now I feel like I keep seeing similar ideas argued here again and again, without much organization.
(That said, I imagine any name should come from the group advocating this vision)
Yeah, I would love to see people go out and try this experiment and I like the tag “democratic altruism”. There’s a chance that if people with this vision were to have their own space, then these tensions might ultimately dissipate.
[2] It’s scandalous to me that some of the opportunities GiveWell has found were not quickly swallowed up by the big fish.