Thanks for the time you’ve put into trying to improve EA, and it’s unfortunate that you feel the need to do so anonymously!
Below are some reactions, focused on points that you highlighted to me over email as sections you’d particularly appreciate my thoughts on.
On anonymity—as a funder, we need to make judgments about potential grantees, but want to do so in a way that doesn’t create perverse incentives. This section of an old Forum post summarizes how I try to reconcile these goals, and how I encourage others to. When evaluating potential grantees, we try to focus on what they’ve accomplished and what they’re proposing, without penalizing them for holding beliefs we don’t agree with.
I understand that it’s hard to trust someone to operate this way and not hold your beliefs against you. More generally, if one wants to do work that’s a fit for only one source of funds (even if those funds run through a variety of mechanisms!), I’m regretfully sympathetic to feeling that the situation is quite fragile and calls for a lot of carefulness.
That said, for whatever it’s worth, I believe this sort of thing shouldn’t be a major concern w/r/t Open Philanthropy funding; “lack of output or proposals that fit our goals” seems like a much more likely reason not to be funded than “expressed opinions we disagree with.”
On conflicts of interest: with a relatively small number of people interested in EA overall, it doesn’t feel particularly surprising to me that a fairly small number of particularly prominent folks are or have been involved in several of the top organizations. More specifically:
Since Open Philanthropy funds a large % of the orgs focused on our priority issues, it doesn’t seem surprising or concerning that many of the people who’ve spent some time working for Open Philanthropy have also spent some time working for Open Philanthropy grantees. I think it is generally common for funders to hire people who previously worked at their grantees, and in turn for ex-employees of funders to leave for jobs at grantees.
It doesn’t seem surprising or concerning that people who have written prominent books on EA-connected ideas have also helped build community infrastructure organizations such as the Centre for Effective Altruism.
To be clear, I think it’s important for conflicts of interest to be disclosed and handled appropriately, and there are some conflicts of interest that concern me for sure—I don’t at all mean to minimize the importance of conflicts of interest or potential concerns around them. I still thought it was worth sharing those reactions to the specific takes given in that section of the post.
On our concentration on a couple of existential risks: here I think we disagree. OP works on a wide variety of causes, but I don’t think we should be diversifying more than we are *within* existential risk given our picture of the size and neglectedness of the different risks.
On being in line with the interests of billionaires: I understand the misgivings that people have about EA being so reliant on a small number of funders, and address that point below. And I understand skepticism that funders who have made their wealth in the technology industry have only global impact in mind when they focus their philanthropy on technology issues. For what it’s worth, in the case of the particular billionaires I know best, Cari and Dustin were pretty emotionally reluctant to work on x-risks (and I was as well); this felt, at least to me, like a case of them reluctantly concluding that these are important issues rather than coming in with pet causes.
On centralization of funding: I’m having trouble operationalizing the calls for less centralization of funding decision-making, which seems to be the main driver of much of your concerns. I agree that heavy concentration of funding for a given area brings some concerns that would be reduced if the same amount of funding were more spread out among funders; but I haven’t seen an alternative funding mechanism proposed that seems terribly promising.
I was broadly in sync with Dustin’s thoughts here, though I’m not saying I’d endorse every word. I don’t see a good way to define the “members” of EA without keeping a lot of judgment/discretion over what counts (and thus keeping the concentration of power around), or eroding the line between EA and the broader world with its very different priorities. To me, it looks like EA is fundamentally a self-applied label for a bunch of individuals making decisions using an intellectual framework that’s both unusual and highly judgment-laden; I think there are good and bad things about this, but I haven’t seen a way to translate it into more systematic or democratic formal structures without losing those qualities.
I’m not confident here and don’t pretend to have thought it fully through. I remain interested in suggestions for approaches to spending Cari’s and Dustin’s capital that could improve how it’s spent—the more specific and mechanistic, the better.
What’s best for spending Cari and Dustin’s financial capital may not be what’s best for the human community made up of EAs. One could even argue that the human capital in the EA community is roughly on par with or even exceeds the value of Good Ventures’ capital. Just something to think about.
Do you have specific concerns about how the capital is spent? That is, are you dissatisfied and looking to address concerns that you have or to solve problems that you have identified?
I’m wondering about any overlap between your concerns and the OP’s.
I’d be glad for an answer or just a link to something written, if you have time.
Hi Holden, thanks for writing this up, but would it be possible for you to say something with a little bit more substance? At present it seems rather perfunctory and potentially a little insulting.
I’ve attempted to translate the comment above into a series of plain-English bottom-lines.
I apologise if the tone is a little forthright; that’s a trade-off I’ve made in favour of clarity and intellectual honesty.
On anonymity
“Yeah I can see why there might seem to be a problem, and I promise that I am truly very sorry that you’re facing its consequences. In any case, I promise that everything is actually completely fine and you don’t need to worry! I acknowledge that (as you have already said) my promises don’t count for much here, but… trust me anyway! No, I will not take any notice of the specific issues you describe, nor the specific solutions that you propose.”
On conflicts of interest
“Here I will briefly describe some of the original causes of the problem. I personally think that it’s no big deal, and will not engage with any of the arguments or examples you provide. I promise we’re taking it really seriously, though.”
On focusing on a couple of existential risks (which is a gross simplification of the section I presume you’re responding to?)
“I personally think everything is fine, no I will not engage with any of the arguments or examples you provide.”
On being in line with the interests of billionaires
“I understand your concerns, but most of our tech billionaire donors changed their minds to fit the techno-political culture of Silicon Valley rather than starting off that way, and thus all incentive structures and cultural factors are completely irrelevant.”
On centralization of funding
“I perfunctorily agree that there is a problem, but I’m having trouble operationalizing the operational proposals you made. I will provide no specifics. I think membership-demarcation may be a problem, and will ignore your proposals for solving it.”
“By the way, would you mind doing even more unpaid work to flesh out specific mechanistic proposals, even though I, the person with the power to implement such proposals, just completely ignored them all in the sections I responded to?”
Despite my pre-existing intellectual respect for you, Holden, I really can’t escape reading this as a somewhat-more-socially-competent version of Buck’s response:
“We bosses know what we’re doing, you’re welcome to disagree if you want, but if you want to be listened to you need to do a bunch of unpaid work that we will probably completely ignore, and we most likely won’t listen to you at all anyway.”
This is what power does to your brain: you are only able to countenance posting empty EA-ified PR-speak like this because you are accountable only to a few personal friends that basically agree with you, and can thus get away with more or less ignoring external inputs.
Writing like this really reminds me of the bit about Interpretive Labour in Dead Zones of the Imagination:
For example, in American situation comedies of the 1950s, there was a constant staple: jokes about the impossibility of understanding women. The jokes (told, of course, by men) always represented women’s logic as fundamentally alien and incomprehensible. “You have to love them,” the message always seemed to run, “but who can really understand how these creatures think?” One never had the impression the women in question had any trouble understanding men. The reason is obvious. Women had no choice but to understand men. In America, the fifties were the heyday of a certain ideal of the one-income patriarchal family, and among the more affluent, the ideal was often achieved. Women with no access to their own income or resources obviously had no choice but to spend a great deal of time and energy understanding what their menfolk thought was going on.
This kind of rhetoric about the mysteries of womankind appears to be a perennial feature of such patriarchal arrangements. It is usually paired with a sense that, though illogical and inexplicable, women still have access to mysterious, almost mystical wisdom (“women’s intuition”) unavailable to men. And of course something like this happens in any relation of extreme inequality: peasants, for example, are always represented as being both oafishly simple, but somehow, also, mystically wise. Generations of women novelists—Virginia Woolf comes most immediately to mind (To the Lighthouse)—have documented the other side of such arrangements: the constant efforts women end up having to expend in managing, maintaining, and adjusting the egos of oblivious and self-important men, involving the continual work of imaginative identification, or interpretive labor. This work carries over on every level. Women everywhere are always expected to continually imagine what one situation or another would look like from a male point of view. Men are almost never expected to do the same for women.
Overwhelmingly one-sided social arrangements breed stupidity: by being in a position where you’re powerful enough to ignore people with other points of view, you become extremely bad at understanding them.
Thus, the oblivious bosses (egged on by mixed teams of true sycophants and power/money-seeking yes-men) continue doing whatever they want to do, and an invisible army of exhausted, exasperated, and powerless subordinates scramble to semi-successfully translate the whims of the bosses into bureaucratic justifications for doing the things that actually need to be done.
The bosses can always cook up some justification for why their being in charge is the best way forward, and either never hear critiques (because critics fear for their careers) or, as seen here, lazily dismiss them without consequence.
Speaking as someone with a little experience in organisations and movements similar to this one, which slowly lost their principles as they calcified into self-serving bureaucracies:
This is what it looks like.
We have warned A.C. Skraeling before about their behavior. “I will rephrase your statement as (insulting thing the person clearly didn’t say)” violates our norms. We are therefore issuing them a one-month ban.
Looping back some months later: FWIW, while I disagree with most of the rest of the comment (and can see a case for a ban as a result), I quite appreciate the point about “interpretive labor”, and I’ve found it an interesting/useful conceptual handle in my toolkit since reading it.
(This is a high bar, as most EA Forum comments do not update me nearly as much.)