I feel similarly to Jason and JWS. I don’t disagree with any of the literal statements you made, but I think the frame is really off. Perhaps OP benefits from this frame, but I probably disagree with that too.
Another frame: OP has huge amounts of soft and hard power over the EA community. In some ways, it is the de facto head of the EA community. Is this justified? How effective is it? How does it react to requests for information about questionable grants that have predictably negative impacts on the wider EA community? What steps does it take to guard against motivated reasoning when doing things that look like stereotypical examples of motivated reasoning? There are many people who have a stake in these questions.
Thanks, that is interesting and feels like it has conversational hooks I haven’t heard before.
What would it mean to say Open Phil was justified or not justified in being the de facto head of the community? I assume you mean morally justified, since it seems pretty logical on a practical level.
Supposing a large enough contingent of EA decided it was not justified: what then? I don’t think anyone is turning down funding for the hell of it, so giving up open phil money would require a major restructuring. What does that look like? Who drives it? What constitutes “large enough”?
Briefly in terms of soft and hard power:
Soft power
Deferring to OP
Example comment about how much some EAs defer to OP even when they know it’s bad reasoning.
OP’s epistemics are seen as the best in EA, and jobs there are the most desirable.
The recent thread about OP allocating most of its neartermist budget to FAW (farmed animal welfare), and especially its comments, shows much-reduced deference (or at least more willingness to openly take such positions) among some EAs.
As more critical attention is turned towards OP among EAs, I expect deference will reduce further; e.g., some of David Thorstad’s critical writings have been cited on this forum to that effect.
I expect this will continue happening organically, particularly in response to failures and scandals; the castle itself played a role in reducing deference.
Hard power
I agree no one is turning down money willy-nilly, but if we ignore labels, how much OP money and effort actually goes into governance and health for the EA community, rather than recruitment for longtermist jobs?
In other words, I’m not convinced it would require restructuring, as opposed to just structuring.
A couple of EAs I spoke to about reforms both talked about how huge sums of money would be needed to restructure the community, and how it’s effectively impossible without a megadonor. I didn’t understand where they were coming from. Building and managing a community doesn’t take big sums of money, and EA is much richer than most movements and groups.
Why can’t EAs set up a fee-paying society? People could pay annual membership fees and in exchange be part of a body that provided advice for donations, news about popular cause areas and the EA community, a forum, annual meetings, etc. Leadership positions could be decided by elections. I’m just spitballing here.
Of course this depends on what one’s vision for the EA community is.
What do you think?
The math suggests that the meta would look very different in this world. CEA’s proposed budget for 2024 is $31.4MM by itself: about half for events (mostly EAG) and about a quarter for groups. There are of course other parts of the meta. There were 3,567 respondents to the EA Survey 2022, which could be an overcount or an undercount of the number of people who might join a fee-paying society. Only about 60% were full-time employed or self-employed; most of the remainder were students.
Maybe a leaner, more democratic meta would be a good thing—I don’t have a firm opinion on that.
To make sure I understand: this is an answer to “what should EA do if it decides OpenPhil’s power isn’t justified?” And the answer is “defer less, and build a grassroots community structure”?
I’m not sure what distinction you’re pointing at with structure vs. restructure. They both take money that would have to come from somewhere (although we can debate how much money). Maybe you mean OP wouldn’t actively oppose this effort?
To the first: Yup, it’s one answer. I’m interested to hear other ideas too.
Structure vs restructuring: my point was that a lot of the existing community infrastructure OP funds is mislabelled; it is closer to a deep recruitment funnel for longtermist jobs than to infrastructure for the EA community in general. So for the EA community to move away from OP infrastructure wouldn’t require relinquishing as much infrastructure as the labels might suggest.
For example, and this speaks to @Jason’s comment, the Centre for Effective Altruism is primarily funded by the OP longtermist team to (as far as I can tell) expand and protect the longtermist ecosystem. It acts and prioritizes accordingly. It is closer to a longtermist talent recruitment agency than a centre for effective altruism. EA Globals (impact often measured in connections) are closer to longtermist career fairs than to a global meeting of effective altruists. CEA groups prioritize recruiting people who might apply for and get OP longtermist funding (“highly engaged EAs”).
I think we have a lot of agreement in what we want. I want more community infrastructure to exist, recruiting to be labeled as recruiting, and more people figuring out what they think is right rather than deferring to authorities.
I don’t think any of these need to wait on proving open phil’s power is unjustified. People can just want to do them, and then do them. The cloud of deference might make that harder[1], but I don’t think arguing about the castle from a position of entitlement makes things better. I think it’s more likely to make things worse.
Acting as if every EA has standing to direct open phil’s money reifies two things I’d rather see weakened. First it reinforces open phil’s power, and promotes deference to it (because arguing with someone implies their approval is necessary). But worse, it reinforces the idea that the deciding body is the EA cloud, and not particular people making their own decisions to do particular things[2]. If open phil doesn’t get to make its own choices without community ratification, who does?
[1] I remember reading a post about a graveyard of projects CEA had sniped from other people and then abandoned. I can’t find that post and it’s a serious accusation, so I don’t want to make it without evidence, but if it is true, I consider it an extremely serious problem and betrayal of trust.
[2] Yes, everyone has standing to object to negative externalities.
“Narrow” is meant to be neutral to positive here. No event can be everything to all people, and I think it’s great they made an explicit decision on trade-offs. They maybe could have marketed it more accurately; they’re moving that way now, and I wish it had gone farther earlier. But I think even perfectly accurate marketing would have left a lot of people unhappy.
Maybe some people argued from a position of entitlement. I skimmed the comments you linked above and did not see any entitlement. Perhaps you could point out more specifically what you felt was entitled, although a few comments arguing from entitlement would only move me a little, so this may not be worth pursuing.
The bigger disagreement, I suspect, is over what we each think the point of EA and the EA community is. You wrote that you want it to be a weird do-ocracy. Would you like to expand on that?
Maybe you two might consider having this discussion using the new Dialogue feature? I’ve really appreciated both of your perspectives and insights in this discussion, and I think the collaborative back-and-forth you’re having seems a very good fit for how Dialogues work.
That’s helpful.
So in this hypothetical, certain functions transfer to the fee-paying society, and certain functions remain funded by OP. That makes sense, although I think the range of what the fee-paying society can do on fees alone may be relatively small. If we estimate 2,140 full fee-payers at $200 each and 1,428 students at $50 each, that’s south of $500K. You’d need a diverse group of EtGers willing to put up $5K-$25K each for this to work, I suspect. I’m not opposed; in fact, my first main post on the Forum was in part about the need for the community to secure independent funding for certain epistemically critical functions. I just want people who advocate for a fee-paying society to bite the bullet of how much revenue fees could generate and what functions could be sustained on that revenue. It sounds like you are willing to do so.
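To make that bullet-biting concrete, here’s a minimal back-of-the-envelope sketch in Python. The member counts follow from splitting the 3,567 EA Survey 2022 respondents roughly 60/40 (employed vs. students, per the figures above), and the fee levels are illustrative assumptions rather than proposed prices:

```python
# Back-of-the-envelope revenue estimate for the hypothetical fee-paying
# society. All inputs are assumptions from the discussion above, not data.

full_members, full_fee = 2_140, 200       # ~60% of survey respondents, $200/yr assumed
student_members, student_fee = 1_428, 50  # remainder, mostly students, $50/yr assumed

revenue = full_members * full_fee + student_members * student_fee
print(f"Estimated annual fee revenue: ${revenue:,}")  # $499,400

# For scale, compare against CEA's proposed 2024 budget cited earlier.
cea_budget = 31_400_000
print(f"Fee revenue as a share of CEA's budget: {revenue / cea_budget:.1%}")  # ~1.6%
```

On these assumptions, fees alone cover under 2% of CEA’s current budget, which is the gap the hypothetical $5K-$25K EtG contributions would have to fill.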
But looping back to your main point about the “huge amounts of soft and hard power over the EA community” held by OP, how much would change in this hypothetical? OP still funds the bulk of EA, still pays for the “recruitment funnel,” pays the community builders, and sponsors the conferences. I don’t think characterizing the bulk of what CEA et al. do as a “recruitment funnel” for the longtermist ecosystem renders those functions less important as sources of hard and soft power. OP would still be spending ~$20-30MM on meta versus perhaps ~$1-2MM for the fee-paying society.
OP and most current EA community work take a “Narrow EA” approach. The theory of change is that OP and EA leaders have identified neglected ideas and need to recruit elites to enact those ideas. Buying castles and funding expensive recruitment funnels is consistent with this strategy.
I am talking about something closer to a big-tent EA approach. One vision could be to help small and medium donors in rich countries spend more money more effectively on philanthropy, with a distinctive emphasis on cause neutrality and cause prioritization. This can, and probably should, be started in a grassroots fashion with little money. Spending millions on fancy conferences and paying undergraduate community builders might run counter to the spirit and goals of this approach.
A fee-paying society is a natural fit for big-tent EA, not for narrow EA.
I didn’t know that the huge amounts of power held by OP were my main point! I was trying to use that to explain why EA community members were so invested in the castle. I’m not sure I succeeded, especially since I agree with @Elizabeth’s points that no one needs to wait for permission from OP or anyone else to pursue what they think is right, and that the EA community cannot direct OP’s donations.
I personally would love to see a big-tent organization like the one you describe! I think it less than likely that the existence of such an organization would have made most of the people who were “so invested in the castle” significantly less so, but there’s no way to test that. I agree that a big-tent organization would bring in other people, not currently involved in EA, who would be unlikely to care much about the castle.
If you have the energy, I’d love to hear your disagreement on open phil or ownership of money.