FWIW, I’m not sure if you found it already, but I think this is the best piece I’ve seen written so far on the overlaps and differences between EA and SJ worldviews: What Makes Outreach to Progressives Hard
IanDavidMoss
One context note that doesn’t seem to be reflected here is that in 2014, there was a lot of optimism for a bipartisan political compromise on criminal justice reform in the US. The Koch network of charities and advocacy groups had, to some people’s surprise, begun advocating for it in their conservative-libertarian circles, which in turn motivated Republican participation in negotiations on the Hill. My recollection is that Open Phil’s bet on criminal justice reform funding was not just a “bet on Chloe,” but also a bet on tractability: i.e., that a relatively cheap investment could yield a big win on policy because the political conditions were such that only a small nudge might be needed. This seems to have been an important miscalculation in retrospect, as (unless I missed something) a limited-scope compromise bill took until the end of 2018 to get passed.
I’m not aware of any other significant criminal justice legislation that has passed in that time period. [Edit: while this is true at the national level, arguably there has been a lot of progress on CJR at state and local levels since 2014, much of which could probably be traced back to advocacy by groups like those Open Phil funded.] This information strongly supports the “Leverage Hypothesis,” which was cited by Open Phil staff themselves, so I think it ought to be weighted pretty strongly in your updates.
The top-voted suggestion in FTX’s call for megaproject ideas was to evaluate the impacts of FTX’s own (and other EA) grantmaking. It’s hard to conduct such an evaluation without, at some point, doing the kind of analysis Jack is calling for. I don’t have a strong opinion about whether it’s better for FTX to hire in-house staff to do this analysis or have it be conducted externally (I think either is defensible), but either way, there’s a strong demonstrated demand for it and it’s hard to see how it happens without EA dollars being deployed to make it possible. So I don’t think it’s unreasonable at all for Jack to make this suggestion, even if it could have been worded a bit more politely.
I agree. While I appreciate the push to lower the barriers to posting for those who feel intimidated, the flipside of this is that it’s pretty demotivating when a post that reflects five months and hundreds of hours of work is on the front page for less than a day. I feel like there’s something wrong with the system when I can spend five minutes putting together a linkpost instead and earn a greater level of engagement.
I wasn’t there at the very beginning, but have followed the effective philanthropy “scene” since 2007 or so. My sense is that most EA community members aren’t very knowledgeable about this whole side of institutional philanthropy, so I was pleasantly surprised to see the history recounted pretty accurately here! With that said, one quibble is that the book you cited, Effective Philanthropy by Mary Ellen Capek and Molly Mead, is not one I’d ever heard of before reading this post; I think this is just a case of a low-profile resource happening to get good Google search results years later.
Here is a bit of additional background on the key players and some of their intersections, as I understand it:
The effective philanthropy movement was very much a child of the original dot-com boom in the late 1990s. While CEP is based in Boston, the scene was mostly driven by an earlier generation of West Coast tech magnates who were interested in bringing business concepts like results-based management to philanthropy. Education funding was viewed as a major priority and there were close ties to the charter school movement, which saw a number of influential organizations like KIPP incubated by funders looking to put these ideas into practice. With that said, CEP’s Phil Buchanan has consistently pushed back against the idea that nonprofits are analogous to businesses, despite his own MBA from Harvard Business School.
The William and Flora Hewlett Foundation has an Effective Philanthropy Program and has been a major financial supporter of CEP for a long time. Hewlett’s former president Paul Brest (2000-2012) pioneered the notion of “strategic philanthropy” which is closely related both in spirit and sociologically to this movement. Fun trivia note: Hewlett’s Effective Philanthropy program was an early funder of GiveWell at the time when that organization was precariously situated (i.e., pre-Dustin & Cari).
Stanford Social Innovation Review was closely associated with this scene as well. With startup funding from Hewlett, I believe it was intended to be a Harvard Business Review for the social sector when it was founded in 2003. (HBR had published the original article on “venture philanthropy” in 1997.)
Some other funders that have been influential include Mario Morino’s Venture Philanthropy Partners and his Leap of Reason community, the Edna McConnell Clark Foundation, the Robin Hood Foundation, and REDF (which developed the social return on investment methodology, a form of cost-benefit analysis).
Over the past decade, the consensus among US-based staffed foundations has shifted hard against some of the technocratic premises that drove the effective philanthropy movement, in particular its emphasis on measurable outcomes and its tendency to invest lots of funder resources in strategy development. The Whitman Institute’s work probably contributed in a minor way to that dynamic, but on my read a much stronger influence has been the growing emphasis on racial justice in the nonprofit sector since the dawn of the Black Lives Matter movement. Via a variety of pathways, including the widespread socialization of Tema Okun’s work, that emphasis caused so-called “top-down” approaches like effective/strategic philanthropy to feel out of touch with the moment. One of the earliest points of tension was a series the National Committee for Responsive Philanthropy began publishing in 2009, called “Philanthropy at its Best,” critiquing current foundation practices; Brest wrote a four-part essay responding to it in 2011. A parallel thread of critique comes from complexity science, via the argument that the wicked problems philanthropy is trying to solve are knotty enough that predicting the outcomes of philanthropic investments at any meaningful level of detail is a fool’s errand, and funders should therefore defer to the expertise of grantees wherever possible. On that front, this essay from one of the co-founders of FSG (a philanthropy consultancy closely associated with Harvard Business School and the early days of venture philanthropy) was particularly influential.
I don’t believe there was one single event that caused the momentum around effective philanthropy to fall apart, but by 2016 or so it was clear that its peak was in the rear-view mirror; a particularly dramatic turn was when Hal Harvey, Paul Brest’s co-author on their 2008 book Money Well Spent, which was written while Brest was still president of Hewlett, wrote an op-ed apologizing for his role advancing strategic philanthropy. There’s a much longer conversation to have about which of the critiques of effective philanthropy are worth attending to, and to what extent, and how they relate to effective altruism, but I’m happy to see it pointed out that many of the topics EA is most concerned with have been discussed at length in other venues.
I’ve been thinking about this too. I was really struck by the contrast between the high level of explicit support for “one of our own” running for office vs. the usual resistance to political activism or campaigning otherwise. Personally, I’m strongly in favor of good-faith political campaigning on EA grounds, but from my perspective explicit ties to the EA community shouldn’t matter so much in that calculus—rather, what matters is our expectations of what the candidates would do to advance or block EA-aligned priorities, whether the candidates are branded as EA or not.
In 2020 I suggested that it might be a good idea to set up an entity to vet and endorse candidates for office on EA grounds. While I’m sure such an entity would have still supported Carrick in retrospect, I think one benefit of having a resource like this is that it would allow us to identify, support, and develop relationships with other politicians around the US and in the rest of the world who would be really helpful to have in office while not facing some of the disadvantages of being a newcomer/outsider that Carrick faced.
Yes, I strongly agree with this. Almost all money in politics goes to establishing and maintaining narratives about the candidates, but money becomes a problem rather than a help in politics when the supporter and candidate allow the money itself to become the narrative. This is especially true in a Democratic primary.
David, I hate to remind you that EA interventions are supposed to be tractable...
Just noting that in the comments of the original post by Nathan Young that the authors linked to, the top-upvoted suggestion was to offset the gap in nuclear security funding created by the MacArthur Foundation’s exit from the field. I recently had an opportunity to speak to someone who was there at the time of MacArthur’s decision and can share more about that privately, but suffice to say that our community should not treat the foundation’s shift in priorities as a strong signal about the importance or viability of work in this space going forward.
Really appreciate you writing this! Echoing others, I think many of these more self-serving motivations are pretty common in the community. With that said, I think some of these are much more potentially problematic than others, and the list is worth disaggregating on that dimension. For example, your comment about EA helping you not feel so fragile strikes me as prosocial, if anything, and I don’t think anyone would have a problem with someone gaining hope that their own suffering could be reduced from engaging in EA.
The ones that I think are most worrying and worth pushing back on (not just for you, but for all of us in the community) are:
Affiliation with EA aligns me with high-status people and elite institutions, which makes me feel part of something special, important and exclusive (even if it’s not meant to be)
EA is partly an intellectual puzzle, and gives me opportunities to show off and feel like I’m right and other people are wrong / I don’t have to get my hands dirty helping people, yet I can still feel as or more legitimate than someone who is actually on the front line
It is a way to feel morally superior to other people, to craft a moral dominance hierarchy where I am higher than other people
The first one is tricky, as affiliation with high-status people and organizations can be instrumentally quite useful for achieving impact—indeed, in some contexts it’s essential—and for that reason we shouldn’t reject it on principle. And just like I think it’s okay to enjoy money, I think it’s okay to enjoy the feeling of doing something special and important! The danger is in having the status become its own reward, replacing the drive for impact. I feel that this is something we need to be constantly vigilant about, as it’s easy to mistake social signals of importance for actual importance (a.k.a. LARPing at impact).
I grouped the “intellectual puzzle” and “get my hands dirty” items because I see them as two sides of the same coin. In recent years it feels to me that EA has lost touch a bit with its emotional core, which is arguably easier to bring forward in the contexts of animal welfare and global poverty than x-risk (and to the extent there is an emotional core to x-risk, it is mostly one of fear rather than compassion). I personally love solving intellectual puzzles and it’s a big reason why I keep coming back to this community, but it mustn’t come at the expense of the A in EA. I group this with “get my hands dirty” because I think for many of us, hard intellectual puzzles are our bread and butter and actually take less effort/provoke less discomfort than putting ourselves in a position to help people suffering right in front of us. I similarly see this one as a balance to strike.
The last one is the only one that I think is just unambiguously bad. Not only is it incorrect on its face, or at least at odds with what I see as EA’s core values, but it is a surefire way to turn off people who might otherwise be motivated to help. And indeed there has been a history of people in EA publicly communicating in a way that came across to others as morally arrogant, especially in the early years of the movement, which created rifts with mainstream nonprofit/social sector practice that are still there today (e.g.).
Just wanted to say that I found this update really impressive and the case for (and against) impact clearly presented. Well done!
I strongly agree with Sam on the first point regarding downside risks. My view, based on a range of separate but similar interactions with EA funders, is that they tend to overrate the risks of accidental harm [1] from policy projects, and especially so for more entrepreneurial, early-stage efforts.
To back this up a bit, let’s take a closer look at the risk factors Asya cited in the comment above.
Pushing policies that are harmful. In any institutional context where policy decisions matter, there is a huge ecosystem of existing players, ranging from industry lobbyists to funders to media outlets to think tanks to agency staff to policymakers themselves, who are also trying to influence the outcomes of legislation/regulation/etc. in their preferred direction. As a result, actually getting a policy enacted is inherently quite hard, and almost impossible without achieving buy-in from a diverse range of stakeholders. While that process can be frustrating and often results in watering down really good ideas to something less inspiring, it is actually quite good for mitigating the downside risks from bad policies! It’s understandable to think of such a volatile mix of influences as scary and something to be avoided, but we should also consider the possibility that it is a productive way to stress-test ideas coming out of EA/longtermist communities by exposing them to audiences with different interests and perspectives. After all, these interests do, at least in part, reflect the landscape of competing motivations and goals in the public more generally, and thus are often relevant to whether a policy idea will be successful or not.
Making key issues partisan. My view is that this is much more likely to happen by way of involvement in electoral politics than traditional policy-advocacy work. Importantly, though, we just had a high-profile test of this idea in the form of Carrick Flynn’s bid for Congress. By the logic of EA grantmakers worried about partisan politicization, my sense is that the Flynn campaign is one of the riskiest things this community has ever taken on (and remember, we only saw the primary—if he had won and run in the general, many Republican politicians’ and campaign strategists’ first exposure to EA and longtermism would have been by way of seeing a Democrat supported by two of the largest Democratic donors running on EA themes in a competitive race against one of their own.) And yet as it turned out, it did not result in longtermism being politicized basically at all. So while the jury is still out, perhaps a reasonable working hypothesis based on what we’ve seen thus far is that “try to do good and help people” is just not a very polarizing POV for most people, and therefore we should stress out about it a little less.
Creating an impression (among policymakers or the broader world) that people who care about the long-term future are offputting, unrealistic, incompetent, or otherwise undesirable to work with. I think this one is pretty easily avoided. If you have someone leading a policy initiative who is any of those things, they probably aren’t going to make much progress and their work thus won’t cause much harm (other than wasting the grantmaker’s money). Furthermore, the increasing media coverage of longtermism and the fact that longtermism has credible allies in society (multiple billionaires, an increasing number of public intellectuals, etc.) both significantly mitigate the concern expressed here, as those factors are much more likely to influence a broad set of policymakers’ opinions and actions.
“Taking up the space” such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project. This seems to be more of a general concern about grantmaking to early-stage organizations and doesn’t strike me as unique to the policy space at all. If anything, it seems to rest on a questionable premise that there is only one channel for communicating with policymakers and only one organization or individual can occupy that channel at a time. As I stated earlier, policymakers already have huge ecosystems of people trying to influence policy outcomes; another entrant into the mix isn’t going to take up much space at all. But also, policymakers themselves are part of a huge bureaucratic apparatus and there are many, many potential levers and points of access that can’t all possibly be covered by a single organization. I do agree that coordination is important and desirable, but we shouldn’t let that in itself be a barrier to policy entrepreneurship, IMHO.
To be clear, I do think these risks are all real and worth thinking about! But to my reasonably well-informed understanding of at least three EA grantmakers’ processes, most of these projects are not judged by way of a sober risk analysis that clearly articulates specific threat models, assigns probabilities to each, and weighs the resulting estimates of harm against a similarly detailed model of the potential benefits. Instead, the risks are assessed on a holistic and qualitative basis, with the result that many things that seem potentially risky are not invested in even if the upside of them working out could really be quite valuable. Furthermore, the risks of not acting are almost never assessed—if you aren’t trying to get the policymaker’s attention tomorrow, who’s going to get their ear instead, and how likely might it be that it’s someone you’d really prefer they didn’t listen to?
While there are always going to be applications that are not worth funding in any grantmaking process, I think when it comes to policy and related work we are too ready to let perfect be the enemy of the good.
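As a toy illustration of the kind of explicit analysis described above (specific threat models with probabilities, weighed against modeled benefits and against the risks of not acting), here is a minimal sketch. All probabilities and dollar figures are invented for illustration; this is not a claim about how any grantmaker actually scores applications.

```python
# Toy expected-value comparison for funding a policy project vs. not funding it.
# Every probability and payoff below is a made-up placeholder.

def expected_value(outcomes):
    """Sum of probability-weighted payoffs over (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Hypothetical outcome models for funding the project
fund = [
    (0.10, 50_000_000),   # policy win: large positive impact
    (0.05, -5_000_000),   # accidentally advances a harmful policy
    (0.03, -2_000_000),   # makes the issue partisan
    (0.82, -200_000),     # nothing happens; only the grant money is lost
]

# The often-omitted counterfactual: risks of *not* acting
dont_fund = [
    (0.15, -10_000_000),  # a worse actor captures policymakers' attention
    (0.85, 0),            # status quo persists
]

print(f"EV(fund):       {expected_value(fund):>12,.0f}")
print(f"EV(don't fund): {expected_value(dont_fund):>12,.0f}")
```

The point of even a crude model like this is that it forces the downside scenarios, their probabilities, and the cost of inaction onto the same ledger, rather than letting "seems risky" function as a veto.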
- ^
Important to note that the observations here are most relevant to policymaking in Western democracies; the considerations in other contexts are very different.
I expect high karma to cause a post to get read more, if only because of readers’ fear of missing out.
I would have phrased this claim a bit more confidently, as there are systems in place that basically ensure this will be the case, at least on average. For example, higher-karma posts stay on the front page longer and are more likely to be selected for curation in the EA Forum Newsletter, the EA Newsletter, and other off-site amplification channels.
It’s possible there’s a more comprehensive writeup somewhere, but I can offer two data points regarding the removal of $30B in pandemic preparedness funding that was originally part of Biden’s Build Back Better initiative (which ultimately evolved into the Inflation Reduction Act):
I had an opportunity to speak earlier this summer with a former senior official in the Biden administration who was one of the main liaisons between the White House and Congress in 2021 when these negotiations were taking place. According to this person, they couldn’t fight effectively for the pandemic preparedness funding because it was not something that representatives’ constituents were demanding.
During his presentation at EA Global DC a few weeks ago, Gabe Bankman-Fried from Guarding Against Pandemics said that Democratic leaders in Congress had polled Senators and Representatives about their top three issues as Build Back Better was being negotiated in order to get a sense for what could be cut without incurring political backlash. Apparently few to no members named pandemic preparedness as one of their top three. (I’m paraphrasing from memory here, so may have gotten a detail or two wrong.)
The obvious takeaway here is that there wasn’t enough attention to motivating grassroots support for this funding, but to be clear I don’t think that is always the bottleneck—it just seems to have been in this particular case.
I also think it’s true that if the administration had wanted to, it probably could have put a bigger thumb on the scale to pressure Congressional leaders to keep the funding. Which suggests that the pro-preparedness lobby was well-connected enough within the administration to get the funding on the agenda, but not powerful enough to protect it from competing interests.
FWIW, this is from yesterday: https://www.politico.com/news/2022/02/22/africa-asks-covid-vaccine-donation-pause-00010667
“The Africa CDC will ask that all Covid-19 vaccine donations be paused until the third or fourth quarter of this year, the director of the agency told POLITICO.
John Nkengasong, director of the Africa Centres for Disease Control and Prevention, said the primary challenge for vaccinating the continent is no longer supply shortages but logistics challenges and vaccine hesitancy — leading the agency and the African Vaccine Acquisition Trust to seek the delay.”
I don’t have any inside info here, but based on my work with other organizations I think each of your first three hypotheses is plausible, either alone or in combination.
Another consideration I would mention is that it’s just really hard to judge how to interpret advocacy failures over a short time horizon. Given that your first try failed, does that mean the situation is hopeless and you should stop throwing good money after bad? Or does it mean that you meaningfully moved the needle on people’s opinions and the next campaign is now likelier to succeed? It’s not hard for me to imagine that in 2016-17 or so, having seen some intermediate successes that didn’t ultimately result in legislation signed into law, OP staff might have held out genuine hope that victory was still close at hand. Or after the First Step Act was passed in 2018 and signed into law by Trump, maybe they thought they could convert Trump into a more consistent champion on the issue and bring the GOP along with him. Even as late as 2020, when the George Floyd protests broke out, Chloe’s grantmaking recommendations ended up being circulated widely and presumably moved a lot of money; I could imagine there was hope at that time for transformative policy potential. Knowing when to walk away from sustained but not-yet-successful efforts at achieving low-probability, high-impact results, especially when previous attempts have unknown correlations with the probability of future success, is intrinsically a very difficult estimation problem. (Indeed, if someone at QURI could develop a general solution to this, I think that would be a very useful contribution to the discourse!)
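To make the estimation problem concrete, here is a toy Bayesian sketch. It assumes each campaign succeeds independently with some unknown probability p, modeled with a Beta prior that gets updated after each failure; the prior, payoff, and cost figures are all invented, and the model deliberately ignores the "moving the needle" possibility, which is exactly the hard part in practice.

```python
# Toy model of "when to walk away" from repeated advocacy attempts.
# Assumes i.i.d. campaigns with unknown success probability p ~ Beta(alpha, beta);
# all numbers are placeholders for illustration.

def posterior_mean(alpha, beta, failures):
    """Posterior mean of p under a Beta(alpha, beta) prior after `failures` failed attempts."""
    return alpha / (alpha + beta + failures)

prior_alpha, prior_beta = 1, 4   # prior belief: ~20% chance any given campaign succeeds
win_value = 100.0                # payoff of a policy win (arbitrary units)
campaign_cost = 12.0             # cost of mounting one more campaign

for failures in range(6):
    p = posterior_mean(prior_alpha, prior_beta, failures)
    ev = p * win_value - campaign_cost
    verdict = "keep going" if ev > 0 else "walk away"
    print(f"after {failures} failures: P(success) = {p:.3f}, EV of next try = {ev:+.2f} -> {verdict}")
```

Under these made-up numbers the rational stopping point arrives after the fourth failure; but note that if failed campaigns also shift public opinion (raising p for the next attempt), or if attempts are correlated, the calculation changes entirely, which is why the real version of this problem is so hard.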
In my prior career I worked with a lot of organizations that offered prizes and fellowships to artists, including writers. $100k is on the high side for a prestigious writer’s fellowship, but not absurdly so. I see the amount as being well targeted for an experienced part-time writer who has been blogging on top of a day job or other commitments and wants to make the leap to full-time but doesn’t feel like they have the runway. It feels harder for me to justify giving an award of that amount to a brand-new blogger; the counterfactual impact would have to be extremely clear.
Aside from that, neither Open Phil nor Good Ventures is structured as a private foundation (Open Phil is an LLC), so Moskovitz & Tuna aren’t subject to the 5% payout rule anyway.
If anyone’s thinking seriously about doing as Linch suggests and would like to talk about the nuts and bolts of consulting, feel free to get in touch. I’ve been consulting independently for four years and am happy to share what I know/discuss potential collaborations.
Wow, I didn’t see it at the time but this was really well written and documented. I’m sorry it got downvoted so much and think that reflects quite poorly on Forum voting norms and epistemics.