Bombing B12 factories has negative externalities and is well covered by that clause. You could make it something less inflammatory, like funding anti-B12 pamphlets, and there would still be an obvious argument that this was harmful. Open Phil might disagree, and I wouldn’t have any way to compel them, but I would view the criticism as having standing due to the negative externalities. I welcome arguments the retreat center has negative externalities, but haven’t seen any that I’ve found convincing.
…who I think do have a strong initial normative point that £15m counterfactually could do a lot of good with the Against Malaria Foundation or Helen Keller International.
My understanding is:
Open Phil deliberately doesn’t fill the full funding gap of poverty and health-focused charities.
While they have set a burn rate and are currently constrained by it, that burn rate was chosen to preserve money for future opportunities they think will be more valuable. If they really wanted to do both AMF and the castle, they absolutely could.
Given that, I think the castle is a red herring. If people want to be angry about Open Phil not filling the full funding gaps when it is able to, I think you can make a case for that, but the castle is irrelevant in the face of its many-billion-dollar endowment.
Is the retreat from transparency true? Are there some references you could provide me for this?
https://www.openphilanthropy.org/research/update-on-how-were-thinking-about-openness-and-information-sharing/
Even assuming OP was already at its self-imposed cap for AMF and HKI, it could have asked GiveWell for a one-off recommendation. The practice of not wanting to fill 100% of a funding gap doesn’t mean the money couldn’t have been used profitably elsewhere in a similar organization.
Are you sure GW has charities that meet its bar that it isn’t funding as much as it wants to? I’m pretty sure that used to not be the case, although maybe it has changed. There’s also value to GW behaving predictably, and not wildly varying how much money it gives to particular orgs from year to year.
This might be begging the question, if the bar is raised due to anticipated underfunding. But I’m pretty sure at one point they just didn’t have anywhere they wanted to give more money to, and I don’t know if that has changed.
2023: “We expect to find more outstanding giving opportunities than we can fully fund unless our community of supporters substantially increases its giving.”
Giving Season 2022: “We’ve set a goal of raising $600 million in 2022, but our research team has identified $900 million in highly cost-effective funding gaps. That leaves $300 million in funding gaps unfilled.”
July 2022: “we don’t expect to have enough funding to support all the cost-effective opportunities we find.” Reports rolling over some money from 2021, but much less than originally believed.
Giving Season 2021: GiveWell expects to roll over $110MM, but also believes it will find very-high-impact opportunities for those funds in the next year or two.
Giving Season 2020: No suggestion that GW will run out of good opportunities—“If other donors fully meet the highest-priority needs we see today before Open Philanthropy makes its January grants, we’ll ask Open Philanthropy to donate to priorities further down our list. It won’t give less funding overall—it’ll just fund the next-highest-priority needs.”
Thanks for the response Elizabeth, and the link as well, I appreciate it.
On the B12 bombing example, it was deliberately provocative to show that, in extremis, there are limits to how convincing one would find the justification “the community doesn’t own its donor’s money” as a defence for a donation/grant.
On the negative externality point, maybe I didn’t make my point that clear. I think a lot of critics are not just concerned about the externalities, but about the actual donation itself, especially the opportunity cost of the purchase. I think perhaps you simply disagree with castle critics on the object level of ‘was it a good donation or not’.
I take the point about Open Phil’s funding gap perhaps being the more fundamental/important issue. This might be another case of decontextualising vs contextualising norms leading to difficult community discussions. It’s a good point and I might spend some time investigating that more.
I still think, in terms of expectations, the new EA joiners have a point. There’s a big prima facie tension between the drowning child thought experiment and the Wytham Abbey purchase. I’d be interested to hear what you think a more realistic ‘recruiting pitch’ to EA would look like, but don’t feel the need to spell that out if you don’t want.
I think a retreat center is a justifiable idea, I don’t have enough information to know if Wytham in particular was any good, and… I was going to say “I trust open phil” here, but that’s not quite right, I think open phil makes many bad calls. I think a world where open phil gets to trust its own judgement on decisions with this level of negative externality is better than one where it doesn’t.
I understand other people are concerned about the donation itself, not just the externalities. I am arguing that they are not entitled to have open phil make decisions they like, and the way some of them talk about Wytham only makes sense to me if they feel entitlement around this. They’re of course free to voice their disagreement, but I wish we had clarity on what they were entitled to.
“I’d be interested to hear what you think a more realistic ‘recruiting pitch’ to EA would look like, but don’t feel the need to spell that out if you don’t want.”
This is the million dollar question. I don’t feel like I have an answer, but I can at least give some thoughts.
I think the drowning child analogy is deceitful, manipulative, and anti-epistemic, so it’s no hardship for me to say we should remove that from recruiting.
Back in 2015, three different EA books came out: Singer’s The Most Good You Can Do, MacAskill’s Doing Good Better, and Nick Cooney’s How To Be Great At Doing Good. My recollection is that Cooney was the only one who really attempted to transmit epistemic taste and a drive to think things through. MacAskill’s book felt like he had all the answers and was giving the reader instructions, and Singer’s had the same issues. I wish EA recruiting looked more like Cooney’s book and less like MacAskill’s.
That’s an awkward recommendation, because there is a high volume of vague negative statements about Nick Cooney. No one is very specific, but he shows up in a lot of animal activism #metoo-type articles. So I want to be really clear that this preference is for that book alone, and it’s been 8 years since I read it.
I think the emphasis on doing The Most Possible Good (* and nothing else counts) makes people miserable and less effective. It creates a mix of decision paralysis and excess deference, and pushes people into projects too ambitious for them to learn from, much less succeed at.
I’m interested in what Charity Entrepreneurship thinks we should do. They consistently incubate the kind of small, gritty projects I think make up the substrate of a healthy ecosystem. TBH I don’t think any of their cause areas are as impactful as x-risk, but succeeding at them is better than failing to influence x-risk, and people build skills while they do it. I feel like CE gets that real work takes time, and I’d like to see that attitude spread.
(@Judith, @Joey would love to get your take here)
@Caleb Parikh has talked about how he grades people coming from “good” EA groups more harshly, because they’re more likely to have been socially pressured into “correct” views. That seems like a pretty bad state of affairs.
I think my EA group (Seattle, 2014) handled this fantastically: there was a lot of arguing with each other and with EA doctrine. I’d love to see more things look like that. But that group was made up heavily of adult rationalists with programming jobs, not college students.
Addendum: I just checked out Wytham’s website, and discovered they list six staff. Even if those people aren’t all full-time, several of them supervise teams of contractors. This greatly ups the amount of value the castle would need to provide to be worth the cost. AFAIK they’re not overstaffed relative to other venues, but the added cost means the venue needs higher utilization to break even.
Additionally, the founder (Owen Cotton-Barratt) has stepped back for reasons that seem merited (history of sexual harassment), but a nice aspect of having someone important and busy in charge was that he had a lot less to lose if it was shut down. The castle seems more likely to be self-perpetuating when the decisions are made by people with fewer outside options.
I still view this as fundamentally open phil’s problem to deal with, but it seemed good to give an update.
“I think the drowning child analogy is deceitful, manipulative, and anti-epistemic, so it’s no hardship for me to say we should remove that from recruiting. ”—I’m interested in why you think this?
It puts you in a high sympathetic nervous system (SNS) activation state, which is inimical to the kind of nuanced math good EA requires.
As Minh says, it’s based in avoidance of shame and guilt, which also make people worse at nuanced math.
The full parable is “drowning child in a shallow pond”, and the shallow pond smuggles in a bunch of assumptions that aren’t true for global health and poverty work. Such as:
“we know what to do”, “we know how to implement it”, and “the downside is known and finite”. Even if you believe surefire interventions exist and somehow haven’t been fully funded, the average person’s ability to recognize them is dismal, and many options make things actively worse. The urgency of drowningchildgottasavethemnow makes people worse at distinguishing good charities from bad. The more accurate analogy would be “drowning child in a fast-moving river when you don’t know how to swim”.
I think Peter Singer believes this, so he’s not being inconsistent; I just think he’s wrong.
“you can fix this with a single action, after which you are done.” Solving poverty for even a single child is a marathon.
“you are the only person who can solve this”. I think there is something good about getting people to feel ownership over the problem and avoiding the bystander effect, but falsely invoking an analogy to a situation where that’s true is not the way to do it.
A single drowning child can be saved via emergency action. A thousand drowning children scattered across my block, replenishing every day, require a systemic fix. Maybe a fence, or draining the land. And again, the fight-or-flight mode suitable for saving a single child in a shallow pond is completely inappropriate for figuring out and implementing the systemic solution.
EA is much more about saying “sorry, actively drowning children, I can do more good by putting up this fence and preventing future deaths”.
When Singer first made the analogy, clothes were much more expensive than they are now, and when I see the argument being made it’s typically toward people who care very little about clothes. What was “you’d make a substantial sacrifice if a child’s life was on the line” has become “you aren’t so petty as to care about your $30 fast fashion shoes, right?”. Just switching the analogy to “ruining your cell phone” would preserve more of the original intent.
I think this might be a good top level post; I’d be keen for more people to see and discuss this point.
Do people still care about the drowning child analogy? Is it still used in recruiting? I’d feel kind of dumb railing against a point no one actually believed in.
I’m not sure (my active intro CB days were ~2019), but I think it is possibly still in the intro syllabus? You could add a disclaimer at the top.
I will say I also never use the Drowning Child argument. For several reasons:
I generally don’t think negative emotions like shame and guilt are a good first impression/initial reason to join EA. People tend to distance themselves from sources of guilt. It’s fine to mention the drowning child argument maybe 10-20 minutes in, but I prefer to lead with positive associations.
I prefer to minimise the use of thought experiments/hypotheticals in intros, and instead use examples relatable to the other person. IMO, thought experiments make the ethical stakes seem too trivial and distant.
What I often do is figure out what cause areas the other person might relate to based on what they already care about, and describe EA as fundamentally “doing good, better”, in the sense of getting people to engage more thoughtfully with values they already hold.
Thanks, that’s helpful!