I think you're slightly missing the point of the "castle" critics here.
By default, OpenPhil/Dustin/Owen/EV don't need anyone's permission for how they spend their money. And it is their money; AFAICT open phil doesn't take small donations. I assume Dustin can advocate for himself here.
One might argue that the castle has such high negative externalities it can be criticized on that front. I haven't seen anything to convince me of that, but it's a possibility, and "right to spend one's own money" doesn't override that.
Technically this is obviously true. And it was the main point behind one of the most popular responses to FTX and all the following drama. But I think that point, and the post, miss people's concerns completely and come off as quite tone-deaf.
To pick an (absolutely contrived) example, let's say OpenPhil suddenly says it now believes that vegan diets are more moral and healthier than all other diets, and that B12 supplementation increases x-risk, and they're going to funnel billions of dollars into this venture to persuade people to go vegan and to drone-strike any factories producing B12. You'd probably be shocked and think that this was a terrible decision and that it had no place in EA.
OpenPhil saying "it's our money, we can do what we want" wouldn't hold much water for you, and the same thing I think goes for the Wytham Abbey critics, who I think do have a strong initial normative point that £15m counterfactually could do a lot of good with the Against Malaria Foundation or Helen Keller International.
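For a rough sense of the counterfactual being invoked, here is a back-of-the-envelope sketch. Both constants are illustrative assumptions (an approximate exchange rate, and a commonly cited ballpark cost per life saved for top GiveWell-recommended charities like AMF), not figures from this thread:

```python
# Back-of-the-envelope: what £15m could mean in GiveWell-charity terms.
# Both constants are illustrative assumptions, not figures from this thread.
GBP_TO_USD = 1.25          # assumed exchange rate
COST_PER_LIFE_USD = 5_500  # commonly cited ballpark for charities like AMF

def lives_saved_equivalent(amount_gbp: float) -> int:
    """Convert a GBP amount into an approximate lives-saved equivalent."""
    return round(amount_gbp * GBP_TO_USD / COST_PER_LIFE_USD)

print(lives_saved_equivalent(15_000_000))  # ~3,400 under these assumptions
```

The precise number matters much less than the order of magnitude, which is what gives the critics' opportunity-cost framing its force.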
Like it's not just a concern about "high negative externalities": many people saw this purchase, along with the lack of (to them) convincing explanation, and concluded that this was just a negative-EV purchase, with negative externalities on top; and then little explanation was forthcoming to change their minds.
I think OpenPhil maybe did this thinking it was a minor part of their general portfolio, without realising the immense power, both explicit and implicit, they have over the EA community, its internal dynamics, and its external perception. They may not officially be in charge of EA, but by all accounts unofficially it works something like that (along with EVF), and I think that should at least figure into their decision-making somewhere.
My guess is that lots of people entered EA with inaccurate expectations, and the volume at which this happens indicates a systemic problem, probably with recruiting. They felt ~promised that EA wasn't the kind of place where people bought fancy castles, or would at least publicly announce they'd bought a retreat center and justify it with numbers.
open phil at one point did have a commitment to transparency, but they publicly renounced it years ago, so that's no longer in play.
Is the retreat from transparency true? Are there any references you could provide for this? I also feel like there's a bit of a "take-it-or-leave-it" implicit belief/attitude from OpenPhil here if true, which I think is unfortunate and, honestly, counterproductive.
I would like to see recruiting get more accurate about what to expect within EA, but I'm not sure what that would look like. I mean, I still think that EA "not being the kind of place where people buy fancy castles" is a reasonable thing to expect and want from EA overall? So I'm not sure that I disagree that people are entering with these kinds of expectations, but I'm confused about why you think it's inaccurate? Maybe it's descriptively inaccurate, but I'm a lot less sure that it's normatively inaccurate.
Bombing B12 factories has negative externalities and is well covered by that clause. You could make it something less inflammatory, like funding anti-B12 pamphlets, and there would still be an obvious argument that this was harmful. Open Phil might disagree, and I wouldn't have any way to compel them, but I would view the criticism as having standing due to the negative externalities. I welcome arguments the retreat center has negative externalities, but haven't seen any that I've found convincing.
"who I think do have a strong initial normative point that £15m counterfactually could do a lot of good with the Against Malaria Foundation, or Helen Keller International."
My understanding is:
Open Phil deliberately doesn't fill the full funding gap of poverty- and health-focused charities.
While they have set a burn rate and are currently constrained by it, that burn rate was chosen to preserve money for future opportunities they think will be more valuable. If they really wanted to do both AMF and the castle, they absolutely could.
Given that, I think the castle is a red herring. If people want to be angry about open phil not filling the full funding gaps when it is able to, I think you can make a case for that, but the castle is irrelevant in the face of its many-billion-dollar endowment.
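To illustrate the scale point, a quick sanity check with hypothetical figures (the endowment size below is just a stand-in for "many-billion dollar", and the dollar conversion assumes an exchange rate):

```python
# Scale check with hypothetical figures: the purchase as a share of the endowment.
purchase_usd = 19_000_000        # ~£15m converted at an assumed exchange rate
endowment_usd = 10_000_000_000   # "many-billion dollar" taken as $10B for illustration

share = purchase_usd / endowment_usd
print(f"{share:.2%}")  # a fraction of a percent of the endowment
```

Under these assumptions the purchase is on the order of 0.2% of the endowment, which is the sense in which the burn rate, not the castle, is the binding constraint.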
"Is the retreat from transparency true? Are there any references you could provide for this?"
Even assuming OP was already at its self-imposed cap for AMF and HKI, it could have asked GiveWell for a one-off recommendation. The practice of not wanting to fill 100% of a funding gap doesn't mean the money couldn't have been used profitably elsewhere in a similar organization.
Are you sure GW has charities that meet their bar that they aren't funding as much as they want to? I'm pretty sure that used to not be the case, although maybe it has changed. There's also value to GW behaving predictably, and not wildly varying how much money it gives to particular orgs from year to year.
This might be begging the question, if the bar is raised due to anticipated underfunding. But I'm pretty sure at one point they just didn't have anywhere they wanted to give more money to, and I don't know if that has changed.
2023: "We expect to find more outstanding giving opportunities than we can fully fund unless our community of supporters substantially increases its giving."
Giving Season 2022: "We've set a goal of raising $600 million in 2022, but our research team has identified $900 million in highly cost-effective funding gaps. That leaves $300 million in funding gaps unfilled."
July 2022: "we don't expect to have enough funding to support all the cost-effective opportunities we find." Reports rolling over some money from 2021, but much less than originally believed.
Giving Season 2021: GiveWell expects to roll over $110MM, but also believes it will find very-high-impact opportunities for those funds in the next year or two.
Giving Season 2020: No suggestion that GW will run out of good opportunities: "If other donors fully meet the highest-priority needs we see today before Open Philanthropy makes its January grants, we'll ask Open Philanthropy to donate to priorities further down our list. It won't give less funding overall; it'll just fund the next-highest-priority needs."
Thanks for the response Elizabeth, and the link as well, I appreciate it.
On the B12 bombing example: it was deliberately provocative, to show that, in extremis, there are limits to how convincing one would find the justification "the community doesn't own its donors' money" as a defence for a donation/grant.
On the negative externality point, maybe I didn't make my point that clear. I think a lot of critics are not just concerned about the externalities, but the actual donation itself, especially the opportunity cost of the purchase. I think perhaps you simply disagree with castle critics on the object level of "was it a good donation or not".
I take the point about Open Phil's funding gap perhaps being the more fundamental/important issue. This might be another case of decontextualising vs contextualising norms leading to difficult community discussions. It's a good point and I might spend some time investigating that more.
I still think, in terms of expectations, the new EA joiners have a point. There's a big prima facie tension between the drowning child thought experiment and the Wytham Abbey purchase. I'd be interested to hear what you think a more realistic "recruiting pitch" to EA would look like, but don't feel the need to spell that out if you don't want to.
I think a retreat center is a justifiable idea, I don't have enough information to know if Wytham in particular was any good, and… I was going to say "I trust open phil" here, but that's not quite right; I think open phil makes many bad calls. I think a world where open phil gets to trust its own judgement on decisions with this level of negative externality is better than one where it doesn't.
I understand other people are concerned about the donation itself, not just the externalities. I am arguing that they are not entitled to have open phil make decisions they like, and the way some of them talk about Wytham only makes sense to me if they feel entitlement around this. They're of course free to voice their disagreement, but I wish we had clarity on what they were entitled to.
"I'd be interested to hear what you think a more realistic 'recruiting pitch' to EA would look like, but don't feel the need to spell that out if you don't want to."
This is the million-dollar question. I don't feel like I have an answer, but I can at least give some thoughts.
I think the drowning child analogy is deceitful, manipulative, and anti-epistemic, so it's no hardship for me to say we should remove that from recruiting.
Back in 2015, three different EA books came out: Singer's The Most Good You Can Do, MacAskill's Doing Good Better, and Nick Cooney's How To Be Great At Doing Good. My recollection is that Cooney was the only one who really attempted to transmit epistemic taste and a drive to think things through. MacAskill's book felt like he had all the answers and was giving the reader instructions, and Singer's had the same issues. I wish EA recruiting looked more like Cooney's book and less like MacAskill's.
That's a weird sentence to write, because there's a high volume of vague negative statements about Nick Cooney. No one is very specific, but he shows up in a lot of animal-activism #metoo-type articles. So I want to be really clear that this preference is for that book alone, and it's been 8 years since I read it.
I think the emphasis on doing The Most Possible Good (* and nothing else counts) makes people miserable and less effective. It creates a mix of decision paralysis and excess deference, and pushes people into projects too ambitious for them to learn from, much less succeed at.
I'm interested in what Charity Entrepreneurship thinks we should do. They consistently incubate the kind of small, gritty projects I think make up the substrate of a healthy ecosystem. TBH I don't think any of their cause areas are as impactful as x-risk, but succeeding at them is better than failing to influence x-risk, and they're skill-building while they do it. I feel like CE gets that real work takes time, and I'd like to see that attitude spread.
@Caleb Parikh has talked about how he grades people coming from "good" EA groups more harshly, because they're more likely to have been socially pressured into "correct" views. That seems like a pretty bad state of affairs.
I think my EA group (Seattle, 2014) handled this fantastically; there was a lot of arguing with each other and with EA doctrine. I'd love to see more things look like that. But it was made up heavily of adult rationalists with programming jobs, not college students.
Addendum: I just checked out Wytham's website and discovered they list six staff. Even if those people aren't all full-time, several of them supervise teams of contractors. This greatly ups the amount of value the castle would need to provide to be worth the cost. AFAIK they're not overstaffed relative to other venues, but you need higher utilization to break even.
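As a toy model of why more staff raises the bar, here is a minimal break-even sketch. All the numbers are hypothetical; the only point is that fixed staff costs scale the utilization the venue needs to justify itself:

```python
# Toy break-even model for a staffed venue. All numbers are hypothetical.
def breakeven_event_days(annual_staff_cost: float, value_per_event_day: float) -> float:
    """Event-days per year at which generated value covers staff costs alone
    (ignoring purchase price, maintenance, and contractor costs)."""
    return annual_staff_cost / value_per_event_day

# e.g. six staff at £50k/year each, vs £2k of counterfactual value per event-day:
print(breakeven_event_days(6 * 50_000, 2_000))  # 150.0 event-days/year
```

Doubling the staff line doubles the required event-days, which is the sense in which the staffing count changes the utilization needed, independent of what the building cost.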
Additionally, the founder (Owen Cotton-Barratt) has stepped back for reasons that seem merited (a history of sexual harassment), but a nice aspect of having someone important and busy in charge was that he had a lot less to lose if it was shut down. The castle seems more likely to be self-perpetuating when the decisions are made by people with fewer outside options.
I still view this as fundamentally open phil's problem to deal with, but it seemed good to give an update.
"I think the drowning child analogy is deceitful, manipulative, and anti-epistemic, so it's no hardship for me to say we should remove that from recruiting." I'm interested in why you think this?
It puts you in a state of high sympathetic nervous system (SNS) activation, which is inimical to the kind of nuanced math good EA requires.
As Minh says, it's based in avoidance of shame and guilt, which also make people worse at nuanced math.
The full parable is "drowning child in a shallow pond", and the shallow pond smuggles in a bunch of assumptions that aren't true for global health and poverty. Such as:
"we know what to do", "we know how to implement it", and "the downside is known and finite", which just don't hold for global health and poverty work. Even if you believe surefire interventions exist and somehow haven't been fully funded, the average person's ability to recognize them is dismal, and many options make things actively worse. The urgency of drowningchildgottasavethemnow makes people worse at distinguishing good charities from bad. The more accurate analogy would be "drowning child in a fast-moving river when you don't know how to swim".
I think Peter Singer believes this, so he's not being inconsistent; I just think he's wrong.
"you can fix this with a single action, after which you are done." Solving poverty for even a single child is a marathon.
"you are the only person who can solve this". I think there is something good about getting people to feel ownership over the problem and avoiding the bystander effect, but falsely invoking an analogy to a situation where that's true is not the way to do it.
A single drowning child can be fixed via emergency action. A thousand drowning children scattered across my block, replenishing every day, require a systemic fix. Maybe a fence, or draining the land. And again, the fight-or-flight mode suitable for saving a single child in a shallow pond is completely inappropriate for figuring out and implementing the systemic solution.
EA is much more about saying "sorry, actively drowning children, I can do more good by putting up this fence and preventing future deaths".
When Singer first made the analogy, clothes were much more expensive than they are now, and when I see the argument being made it's typically towards people who care very little about clothes. What was "you'd make a substantial sacrifice if a child's life was on the line" has become "you aren't so petty as to care about your $30 fast-fashion shoes, right?". Just switching the analogy to "ruining your cell phone" would get more of the original intent.
Do people still care about the drowning child analogy? Is it still used in recruiting? I'd feel kind of dumb railing against a point no one actually believed in.
I will say I also never use the Drowning Child argument. For several reasons:
I generally don't think negative emotions like shame and guilt are a good first impression/initial reason to join EA. People tend to distance themselves from sources of guilt. It's fine to mention the drowning child argument maybe 10-20 minutes in, but I prefer to lead with positive associations.
I prefer to minimise use of thought experiments/hypotheticals in intros, and prefer to use examples relatable to the other person. IMO, thought experiments make the ethical stakes seem too trivial and distant.
What I often do is figure out what cause areas the other person might relate to based on what they already care about, and describe EA as fundamentally "doing good, better", in the sense of getting people to engage more thoughtfully with values they already hold.
On the transparency question, here's the reference:
https://www.openphilanthropy.org/research/update-on-how-were-thinking-about-openness-and-information-sharing/
(@Judith, @Joey, would love to get your take here on what Charity Entrepreneurship thinks we should do)
I think this critique of the drowning child analogy might be a good top-level post; I'd be keen for more people to see and discuss it.
"Do people still care about the drowning child analogy? Is it still used in recruiting?"
I'm not sure (my active intro community-building days were ~2019), but I think it is possibly still in the intro syllabus? You could add a disclaimer at the top.
Thanks, that's helpful!