I’ve known Kat Woods for as long as Eric Chisholm has. I first met Eric several years before either of us first got involved in the EA or rationality communities. I had a phone call with him a few hours ago letting him know that this screencap was up on this forum. He was displeased you didn’t let him know yourself that you started this thread.
He is extremely busy for the rest of the month. He isn’t on the EA Forum either. Otherwise, I don’t speak for Eric. I’ve also made my own reply in the comment thread Eric started on Eliezer’s Facebook post. I’m assuming you’ll see the rest of that comment thread on Facebook too.
You can send me a private message to talk or ask me about whatever, or not, as you please. I don’t know who you are.
For anyone else curious, here is a Google Doc I’ve started writing up about the origins of the EA and rationality groups in Vancouver.
https://docs.google.com/document/d/1p8MPC5j2aZrVX_ugBSHy8-N9HSHWiulR5GHJBfKhQe8/edit?usp=sharing
The question of whether more effective altruists should return to earning to give, as the value of companies like Meta and FTX has declined during the last year, has me pondering whether that's worthwhile, given that nobody in EA seems to know how to spend far more money per year well across multiple EA causes.
I’ve been meaning to write a post about the mixed messaging around what to do about AI alignment. There has been increased urgency to onboard new talent and to launch and expand projects, yet there is an apparently growing consensus that almost everything everyone is doing is either pointless or making things worse.
Setbacks facing the clean meat industry have been mounting during the last couple of years, and there aren’t clear or obvious ways to make significant progress on overcoming them mainly by throwing more money at them in some way.
I’m less familiar with how much room for more funding other EA priority areas have before hitting diminishing marginal returns. I expect that, other than a few clear-cut cases like some of GiveWell’s top recommended charities, there isn’t a strong sense of how to spend well much more money per year than a lot of causes are already receiving from the EA community.
It’s one thing for smaller numbers of people returning to earning to give to know the best targets for increased marginal funding that might fall through after the decline of FTX. It seems shortsighted, though, to send droves of people rushing back into earning to give when there wouldn’t be any consensus on what interventions they should be earning to give to.
I have for a long time thought it would be valuable for RC’s approach to be replicated in many other countries. So I am glad there is now an article like this with which to show effective altruists in other countries what the value of tax-deductible donation portals like RC really is. There is a good chance I will cite this article in the future if I discuss the topic on the EA Forum again, or elsewhere.
Oskar Schindler: I could have got more out. I could have got more. I don’t know. If I’d just… I could have got more.
Itzhak Stern: Oskar, there are eleven hundred people who are alive because of you. Look at them.
Oskar Schindler: If I’d made more money… I threw away so much money. You have no idea. If I’d just...
Itzhak Stern: There will be generations because of what you did.
Oskar Schindler: I didn’t do enough!
Itzhak Stern: You did so much.
[Schindler looks at his car]
Oskar Schindler: This car. Goeth would have bought this car. Why did I keep the car? Ten people right there. Ten people. Ten more people.
[removing Nazi pin from lapel]
Oskar Schindler: This pin. Two people. This is gold. Two more people. He would have given me two for it, at least one. One more person. A person, Stern. For this.
[sobbing]
Oskar Schindler: I could have gotten one more person… and I didn’t! And I… I didn’t!
I know multiple victims/survivors/whatever who were interviewed by TIME, including not only one of the named individuals but also some of the anonymous interviewees.
The first time I cried because of everything that has happened in EA during the last few months was when I learned for the fifth or sixth time that some of my closer friends in EA lost everything because of the FTX collapse.
The second time I cried about it all was today.
Beyond any matter related to Nick Bostrom’s recent apology, my two cents is that, generally, no, most of the effective altruism community doesn’t know how to apologize well.
Thank you for sharing this.
It took courage to publish this, when this state of affairs seems to have become more trying with every week, even as FLI has spent months trying to show how they’ve been learning and doing better after course-correcting from the initial mistake.
I have one main question with a set of a few related questions, though I understand if you’d only answer the first one.
While the FAQ mentions how Nya Dagbladet is one of few publications critical of decisions to put nuclear weapons near the Swedish border, it doesn’t directly address why Nya Dagbladet was considered for a grant in the first place.
What was the specific work Nya Dagbladet had done, or was expected to do, that led to them being considered for a grant in the first place? Would it just have been to fund Nya Dagbladet to publish more media in favour of nuclear de-escalation? Or was it something else?
As there are apparently few, but still some, other publications in Sweden doing the kind of work FLI might have been prospectively interested in funding, what was it specifically about Nya Dagbladet that led FLI to consider them more seriously? How much did each of the following factors matter:
- a perception that Nya Dagbladet published higher-quality or more impactful content?
- Nya Dagbladet having less funding than other publications, and thus being considered a marginally more scalable and neglected publication?
- the apparent independence of Nya Dagbladet’s journalism, such that greater independence might have been thought to give them more leeway to criticize government policies that risked increasing nuclear risk?
It appears Nya Dagbladet significantly deceived FLI, such as by hiding details about its political affiliations when the grant applicants presumably would have known that disclosure might get their application rejected. Yet does FLI consider Nya Dagbladet’s application to have been entirely a lie? I.e., is it thought the publication would have used the money only for what the grant permitted, or might it have scammed FLI outright and used the money to push other propaganda and conspiracy theories?
Having only just published answers in the FAQ while still being doubted, and having to take care of business as this scandal winds down, I don’t expect anyone from FLI to answer these questions immediately. Please take your time to answer them, or just decline to answer them at this time.
I look forward to the next posts in this sequence, “It’s Harder to Eliminate Global Poverty Than You’d Think,” and “The Hard Problem of Consciousness Remains Unsolved.”
When I asked what has caused EA movement growth to slow down, people answered that it seemed likeliest EA made the choice to focus on fidelity instead of rapid growth. That is a vague response. What I took it to mean is:
1. EA as a community, led by EA-aligned organizations, chose to switch from prioritizing rapid growth to prioritizing fidelity.
2. That this was a choice means that, presumably, EA could have continued to grow rapidly.
3. No limiting factor has been identified regarding EA’s growth that is outside the control of the EA community.
4. EA could make the choice to grow once again.
Given this, nobody has identified a reason why EA can’t just grow again if we so choose as a community by redirecting resources at our disposal. Since the outcome is under our control and subject to change, I think it’s unwarranted to characterize the future as ‘bleak’.
Below is a comment I was anonymously asked to post on behalf of an EA community member regarding these grant payout reports.
When I read the perfunctory grant rationale for the Long-Term Future Fund and Community Fund grants, I wondered whether this was a joke or a calculated insult to the EA community.
The one paragraph and 4 bullet points to justify the disbursement of over $1,000,000 to 5 organisations from across the two funds seems like it could have been written up in 3 minutes, with nothing more than a passing knowledge of some of the most well known EA(ish) orgs. This, coming after months of speculation about what the Grant Evaluator entrusted with EA funds for the long term future and the EA community might actually be doing, gives the impression that they weren’t actually doing anything.
Perhaps what is most disappointing is the desultory explanation that all these funds are disbursed in the vague hope that the >$1 million might “subsidiz[e] electronics upgrades or childcare” for the charities’ staff, or pay them higher salaries and “increase staff satisfaction”, and that this might boost productivity. This seems a clear signal, among other things, that the funding space in this area is totally full and the grant manager can’t even come up with plausible-sounding explanations for how the funds they are disbursing to EA insiders might increase impact.
Hi Nick. Thanks for your response. I also appreciate the recent, quick granting from the EA Funds to bring them up to date. One thing I don’t understand, given that most of the grants you wanted to make could have been made by the Open Philanthropy Project, is why:
- the CEA didn’t anticipate this;
- the CEA gave public descriptions, to the contrary, of how the funds you managed would work;
- and, if the CEA learned your intentions ran contrary to what they first told the EA community, they didn’t issue an update.
I’m not aware of a public update of that kind. If there was a private email list for donors to the EA Community and Long-Term Future Funds, and they were issued a correction to how they were prior informed the money in the funds would be granted, I’d like to know. (I’m not demanding to see that update/correction published, if it exists, as I respect the privacy inherent in that relationship. If any donor to these funds or someone from the CEA could inform me if such an update/correction exists, please let me know.)
Regarding my concerns as you outlined them:
(i) delay between receipt and use of funds, (ii) focus on established grantees over new and emerging grantees, and (iii) limited attention to these funds.
That’s an accurate breakdown.
Based on how the other two EA Funds have provided more frequent updates and made more frequent grants in the last year, I expect a lot of donors or community members would find it unusual the EA Community and Long-Term Future Funds granted all the money all at once. But in April you did give an update to that effect.
However, donors to the EA Community and Long-Term Future Funds were initially given the impression that new and emerging grantees would be targeted over established grantees. This impression was given by the CEA, not by yourself as fund manager, but the CEA itself never corrected it. While donors could have surmised from the updates that the plan had changed, I would have expected a clearer update. Again, if one was privately provided to donors to these funds in some form, that would be good to know. Also, given how redundant the EA Funds as you intended to manage them were with your other role as a program officer at Open Phil, it seems clear you didn’t expect you’d have to pay much attention to either of these funds.
However, it appears donors were again given a different impression by the CEA: that more attention would be afforded to the EA Funds. Had donors been given earlier the rationale for why there were less frequent updates from the two funds you’ve been managing, that would have been better. Receiving updates on how much attention the EA Funds would get was one of the suggested improvements in Henry Stanley’s last EA Forum post on the subject.
That’s great news about BERI. I haven’t had a chance to look over everything BERI has done to date, but based on the early work I’ve looked at and the people involved, it sounds promising. Unfortunately, information on the EA Grants has been scarce. Others have asked me about the EA Grants, and I’ve seen others share concerns regarding the uncertainty of when public applications will open again.
It appears at least that there was a communication breakdown between what the CEA initially and publicly told the EA community (which I imagine would include most of those who became donors to the funds) and, at a later stage, how you intended to manage the funds. Regarding this, and:
- further questions regarding the EA Grants;
- the possibility of (an) additional fund manager(s);
I will try following up with the Centre for Effective Altruism more directly. I can’t think of anything else to ask you at this time, so thanks for taking the time to respond and provide updates regarding the EA Funds.
There is a subsection of animal advocates in effective altruism who are concerned that a far future in which anti-speciesism isn’t prevalent isn’t a worthy future. If that’s a confusing wording, let Brian Tomasik, a thought leader in such circles, explain. From “Risks From Astronomical Future Suffering” [emphasis mine]:
It’s far from clear that human values will shape an Earth-based space-colonization wave, but even if they do, it seems more likely that space colonization will increase total suffering rather than decrease it. That said, other people care a lot about humanity’s survival and spread into the cosmos, so I think suffering reducers should let others pursue their spacefaring dreams in exchange for stronger safety measures against future suffering. In general, I encourage people to focus on making an intergalactic future more humane if it happens rather than making sure there will be an intergalactic future.
Thus, whatever charity can most effectively spread anti-speciesism, or other value systems for reducing suffering, would be considered the most effective charity overall. If the most effective way to spread the anti-speciesist meme is an ACE-recommended charity spreading an animal-free diet, then it is, by some lights, legitimately the most effective charity for far-future considerations. Again, to clarify, this wouldn’t be perceived as a rationalization of convergence on specious grounds. It follows from a simple chain of reasoning: get as many as possible of the humans who will steer the far future to care about non-human suffering, so they’ll be inclined to prevent it rather than let it happen.
I just wanted to clarify that there are some animal advocates in this community who believe promoting animal welfare is best for the far future, not because it ensures human survival, but because it prevents the worst outcomes of human survival. This is different from the example provided above. One might consider this a niche position in effective altruism. On the other hand, multiple organizations out of Basel, Switzerland, supported by the Effective Altruism Foundation, are heavily influenced by the work of Brian Tomasik and allied colleagues. I don’t know of good data on how prevalent these concerns are, across all of effective altruism or specifically among those who prioritize animal welfare/anti-speciesism.
I’ve had a half-finished draft post about how effective altruists shouldn’t be so hostile to newcomers to EA from outside the English-speaking world (i.e., primarily the United States and Commonwealth countries). In addition to English not being their first language, especially for younger people or students who don’t have as much experience, there are the problems of mastering the technical language of a particular field, as well as the jargon unique to EA. That can be hard for even many native English speakers.
LessWrong and the rationality community are distinct from EA, and even AI safety has grown much bigger than the rationality community. There shouldn’t be any default expectation that posters on the EA Forum will conform to the communication style of rationalists. If rationalists expect that because they consider their communication norms superior, the least they should do is make more effort to educate others on how to get up to speed, like with style guides, etc. Some rationalists have done that, though rationalists at large aren’t entitled to expect others to do all the work of learning to write just like they do without any help.
A lot of this is the private sensitivity many community members feel about publicly criticizing the Open Philanthropy Project. I’d chalk it up to the relative power Open Phil wields having complicated impacts on all our thinking on this subject: given how little the EA community comments on it, the lack of public feedback Open Phil receives seems out of sync with the idea that they are the sort of organization that would welcome it. Another factor is that the quality of criticism and defense of grantmaking decisions on both sides is quite low. It seems to me EA has overgeneralized its conflict avoidance to exclude scenarios where adversarial debate is fruitful for a community overall, and so when adversarial debate is instrumental, EA is poor at it, to the point it doesn’t recognize good debate.
A pattern I’ve seen is for critics of something in EA to parse disagreement with some aspect(s) of their criticism as a wholesale political rejection of everything they’re saying, or to take it as a personal attack in retaliation for attacking a shibboleth of EA. These reactions are usually patently false, but this hasn’t stopped EA from garnering a reputation for being hypocritically closed to criticism, and impossible to effect change in.
While I wouldn’t say I generally agree with all of Open Phil’s grants, and simply by chance most EAs or other people wouldn’t because there are so many, the impression I’ve gotten is that the EA community and Good Ventures don’t have identical priorities. EA is primarily concerned with global poverty alleviation, AI alignment, and animal welfare. An example of something Open Phil or Good Ventures prioritizes more than EA does is criminal justice reform. While EA agrees criminal justice reform is one of the more promising areas in public policy for doing good, it’s not literally one of EA’s top priorities. So, criminal justice reform is a top priority more particular to Dustin Moskovitz and Cari Tuna.
My impression is that as long as the motivations in Open Phil’s grantmaking don’t pull away from effectiveness and other EA values in the cause areas the community cares most about, the community doesn’t mind as much what Open Phil does. A good example of the EA community being willing to strongly criticize Open Phil, when ineffective grantmaking infringes on a cause area EA is more passionate about, is the criticism Open Phil received from multiple quarters over how they made their grant to OpenAI.
Thanks for articulating arguments for this. There is a strong bias in favour of growth of various kinds in EA, including an elementary growth strategy of naively pursuing growth as fast as possible. I also know several community members who are opposed to growing the movement much at all, as opposed to doing so carefully. However, hardly any effective altruists opposed to different kinds of movement growth lay out their arguments against them. This frustrates me, as I’m genuinely curious to separate the good and bad arguments against rapid movement growth in EA, and the fact that they’re not publicly written out like this makes that difficult.
Framing the arguments around how to do movement growth, rather than whether we should do much movement growth at all, allows for nuanced discussion and is very helpful.
Rough estimate: if ~60% of Open Phil grantmaking decisioning is attributable to Holden, then 47.2% of all EA capital allocation, or $157.4M, was decided by one individual in 2017. 2018 & 2019 will probably have similar proportions.
It seems like EA entered into this regime largely due to historically contingent reasons (Cari & Dustin developing a close relationship with Holden, then outsourcing a lot of their philanthropic decision-making to him & the Open Phil staff).
This estimate seems to have a lot of problems. Attributing so much credit to Holden that way not only treats him in practice as too unique an actor, without substantiation, if not in principle playing fast and loose with how causation works, but also amounts to a wild overestimate from nowhere. This is a pretty jarring turn in what up until that point I had been reading as a reasonable question post.
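To spell out why, here is the arithmetic the quoted figures imply. This is a minimal back-calculation from the estimate’s own numbers; the ‘implied’ totals below are my derivation, not figures from the post:

    # Back out what the quoted estimate implies, using only its own figures.
    holden_share_of_open_phil = 0.60    # quoted assumption
    holden_share_of_ea_capital = 0.472  # quoted conclusion
    holden_dollars = 157.4e6            # quoted conclusion

    # Implied figures (my back-calculation, not from the post):
    total_ea_capital = holden_dollars / holden_share_of_ea_capital
    open_phil_share = holden_share_of_ea_capital / holden_share_of_open_phil

    print(f"Implied total 2017 EA capital allocation: ${total_ea_capital / 1e6:.1f}M")  # ~$333.5M
    print(f"Implied Open Phil share of all EA capital: {open_phil_share:.1%}")          # ~78.7%

In other words, the estimate silently assumes Open Phil accounted for nearly 79% of all EA capital allocated in 2017, an unsubstantiated premise of exactly the kind I mean.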
I made a separate comment for my thoughts on worst-case scenarios, because I have a lot to say on the subject.
I imagine the worst-case scenario is something like an advisor giving radically bad career advice to numerous advisees based on idiosyncratic priorities or beliefs about their own field, and then advisees waste significant amounts of their own time or money acting on that feedback, when they could have easily spent those same resources better. Of course that already happens in EA. So there already isn’t enough quality control in EA for this kind of thing. That isn’t to say I shouldn’t try to ensure greater quality control in my own project, but it’s important to know the pre-existing context in EA.
I should say one reason I haven’t thought about worst-case scenarios you’ve brought up so far is because I’ve taken for granted they’re unlikely to occur. It seems obvious to me people would tend to act in good faith if they were bothering to participate in this network, but even if they were to act in bad faith, anyone saying anything like your suggestions would disqualify them (in my eyes, at least) from participating as an advisor, for the simple reason none of those things have anything to do with careers.
If I include a survey, it should definitely be a feedback survey. I intend to talk to 80,000 Hours to ask them how they set up career coaching, and that will inform how I develop this network too. In the feedback survey for advisees, I’ll include a question about whether their coach did anything inappropriate, especially trying to push the conversation in a direction that had nothing to do with figuring out their careers. If a career coach recommended someone donate a kidney, invest in a dubious crypto startup, or try saving the world by taking a bunch of psychedelics, that would get flagged, and they would be removed from the pool of prospective advisors.
At the same time, effective altruists have written posts on the EA Forum about how to donate kidneys, or recommending people do so. Getting recruited for weird projects can happen at EA events, including official ones like EAG. I can definitely ask others how they’ve minimized the risk of strange things happening. Yet throughout all of this, a small risk of these adverse experiences persists. I know the point you were making wasn’t about these specific examples, but my point is that there is already a small, hard-to-eliminate risk of things like this happening in EA. So I don’t know why someone would single out a career advising network to exploit, or why this of all things would be likelier to produce viral headlines about how bad it is. It just seems so unlikely that I would feel strange introducing a quality control measure like having advisors click a box or sign a digital form saying they’re aware they’re only doing career advising, and not scamming advisees or something.
Again, I will include a quality feedback survey, so anything like this should get caught.
I do take concerns of possible sexual harassment seriously. It seems less likely to happen over an online session, but I will ask other EA groups if there is anything I should do to minimize these kinds of risks in the advising network. That would also get included in a quality feedback survey, though I’m unsure if I should include a separate question about sexual harassment. This is something I will definitely think about a lot more before I set up any in-person advising sessions. In general, there seems to be much more risk with in-person advising sessions, so I will take longer to develop quality control measures before I set those up. By count, at most 18/71 of the possible pairings I could make now would result in in-person advising sessions. Chances are the number of in-person sessions it would make sense to set up at this point is even lower still.
What do you mean by ‘expert team’ in this regard? In particular, if you consider yourself or the other fund managers to be experts, would you be willing to qualify or operationalize that expertise?
I ask because when the EA Fund management teams were first announced, there was a question about why there weren’t ‘experts’ in the traditional sense on the team, i.e., what makes you think you’d be as good at managing the Long-Term Future Fund as a Ph.D. in AI, biosecurity, or nuclear security (assuming that when we talk about the ‘long-term future’ we mostly, in practice, mean ‘existential risk reduction’)?
When the new EA Funds management teams were announced, someone asked this same question, and I couldn’t think of a very good answer. So I figured it’d be best to get the answer from you, in case it gets asked of any of us again, which seems likely.
I’ve received feedback from multiple points in the community that the EA Funds haven’t been responsive in as timely or professional a manner as some would prefer. A factor appears to be that the fund managers are all program officers at the Open Philanthropy Project, a job which, from the fund managers’ perspective, is most of the time more crucial than anything that can be done with the EA Funds. Thus, doing more than a full-time work-equivalent (?… I don’t know how much Open Phil staff work each week) may mean management of the EA Funds gets overlooked. Ben West also made a recent post in the ‘Effective Altruism’ Facebook group asking about the EA Funds, and the response from the Centre for Effective Altruism (CEA) was that they hadn’t had a chance to update the EA Funds webpage with data on what grants had been made in recent months.
Given that, at the current level of funding, the EA Funds aren’t being mismanaged, but rather are more neglected than donors and effective altruists would like, I’d say it might already be time to assign more managers to the funds. Picking Open Phil program officers to run the funds was the best bet for the community to begin with, as they had the best reputation for acumen going in. But if in practice it turns out Nick, Elie, and Lewis only have enough time to manage grants at Open Phil (most of the time), it’s only fair to donors that the CEA assign more fund managers. What’s more, I wouldn’t want the attention of Open Phil program officers to be any more divided than it need be, as I consider their work more important than the management of the EA Funds as is.
If the apparent lack of community engagement regarding the EA Funds is on the part of the CEA team responsible for keeping the webpage updated, as their time may also be divided among more important CEA projects than the EA Funds at any given point in time, that needs to be addressed. I understand the pressure of affording enough money to project management that it gets done effectively while, as an effective non-profit, not wanting to let overhead expand too much and result in inefficient uses of donor money. If that’s the case for CEA staff dividing their time between the EA Funds and more active projects, I think it’d be appropriate for the CEA to hire a dedicated communications manager for the EA Funds overall, and/or someone who will update the webpage with greater frequency. This could probably be done with one additional full-time-equivalent staff hire or less. If it’s not a single new position at the CEA, a part-time-equivalent CEA staffer could have their responsibilities extended to ensuring there’s a direct channel between the EA Funds and the EA community.
In the scope of things, such as the money moved through EA overall, EA Funds management may seem a minor issue. Given its impact on values integral to EA, like transparency and accountability, as well as on ensuring high-trust engagement between EA donors and EA organizations, options like those I’ve listed above seem important to implement. If they aren’t, I’d think there’s greater need for external oversight of the EA Funds overall.
Summary: This is the most substantial round of grant recommendations from the EA Long-Term Future Fund to date, so it is a good opportunity to evaluate the performance of the Fund after changes to its management structure in the last year. I am measuring the performance of the EA Funds on the basis of what I am calling ‘counterfactually unique’ grant recommendations, i.e., grant recommendations that, without the Long-Term Future Fund, neither individual donors nor larger grantmakers like the Open Philanthropy Project would have identified or funded.
Based on that measure, 20 of 23 grant recommendations (87%), worth $673,150 of $923,150 (~73% of the money to be disbursed), are counterfactually unique. Having read all the comments, multiple concerns came up with a few specific grants, based on uncertainty or controversy in the estimation of their value. Even if we exclude those grants to make a ‘conservative’ estimate, 16 of 23 grant recommendations (~69.6%), worth $535,150 of $923,150 (~58% of the money to be disbursed), are counterfactually unique and fit into a more risk-averse approach that would have ruled out the more uncertain or controversial successful grant applicants.
These numbers represent an extremely significant improvement in the quality and quantity of the unique grantmaking opportunities the Long-Term Future Fund has identified since a year ago. This grant report generally succeeds at the goal of coordinating donations through the EA Funds to unique recipients who would otherwise have been overlooked by individual donors and larger grantmakers. This report is also the most detailed of its kind, and creates an opportunity for a detailed assessment of the Long-Term Future Fund’s track record going forward. I hope the other EA Funds emulate and build on this approach.
General Assessment
In his 2018 AI Alignment Literature Review and Charity Comparison, Larks commented on the changes in the management structure of the EA Funds.
To clarify, the purpose of the EA Funds has been to allow individual donors relatively smaller than grantmakers like the Open Philanthropy Project (i.e., all donors in EA except other professional, private, non-profit grantmaking organizations) to identify higher-risk grants for projects that are still small enough that they would be missed by an organization like Open Phil. So, for a respective cause area, an EA Fund functions like an index fund that incentivizes the launch of nascent projects, organizations, and research in the EA community.
Of the $923,150 of grant recommendations made to the Centre for Effective Altruism for the EA Long-Term Future Fund this round, all but $250,000 went to the kinds of projects or organizations that larger grantmakers like the Open Philanthropy Project would typically miss; the remaining $250,000 went to the kinds of grants Open Phil tends to make. To clarify, there isn’t a rule or practice of the EA Funds not making those kinds of grants. It’s at the discretion of the fund managers to decide whether to recommend grants at a given time to more typical grant recipients in their cause area, or to newer, smaller, and/or less-established projects/organizations. At the time of this grantmaking round, recommendations to better-established organizations like MIRI, CFAR, and Ought were considered the best proportional use of the marginal funds allotted for disbursement.
20 grant recommendations (~87% of the total number) totalling $673,150 (~73% of funds)
+ 3 grant recommendations (~13% of the total number) totalling $250,000 (~27% of funds)
= 23 grant recommendations totalling $923,150 (100%)
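As a check on the arithmetic, here is a minimal script reproducing the ‘liberal’ estimate from the breakdown above (the dollar figures are from this report; the script itself is mine, for transparency):

    # 'Liberal' estimate: share of grants and dollars counterfactually
    # unique to the Long-Term Future Fund this round.
    total_grants = 23
    total_dollars = 923_150

    unique_grants = 20        # recommendations individual donors or Open Phil would have missed
    unique_dollars = 673_150  # $923,150 minus the $250,000 to MIRI, CFAR, and Ought

    print(f"{unique_grants / total_grants:.1%} of grant recommendations")  # ~87.0%
    print(f"{unique_dollars / total_dollars:.1%} of money disbursed")      # ~72.9%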
Since this is the most extensive round of grant recommendations from the Long-Term Future Fund under the EA Funds’ new management structure to date, it is the best apparent opportunity to evaluate the success of the changes made to how the EA Funds are managed. In this round of grantmaking, 87% of the total number of grant recommendations, totalling 73% of the money to be disbursed, were for efforts that would otherwise have been missed by individual donors or larger grantmaking bodies.
In other words, the Long-Term Future (LTF) Fund is directly responsible for 20 of the 23 grant recommendations made (87%), totalling 73% of the $923.15K in grants, which presumably would not have been identified had individual donors not been able to pool and coordinate their donations through the LTF Fund. I keep highlighting these numbers because they can essentially be thought of as the LTF Fund’s current rate of efficiency in fulfilling the purposes it was set up for.
Criticisms and Conservative Estimates
Above is the estimate for the number of grants, and the amount of donations to the EA Funds, that are counterfactually unique to the EA Funds, which can be thought of as a measure of how effective the impact of the Long-Term Future Fund in particular is. That is the estimate for the grants donors to the EA Funds very probably could not have identified by themselves. Another question is whether they would opt to donate to the grant recommendations that have just been made by the LTF fund managers. Part of the basis for the EA Funds thus far is trusting the fund managers’ individual discretion, based on their years of expertise or professional experience working in the respective cause area. My above estimates assume all the counterfactually unique grant recommendations the LTF Fund makes are indeed effective. We can think of those numbers as a ‘liberal’ estimate.
I’ve at least skimmed or read all 180+ comments on this post thus far, and a few persistent concerns with the grant recommendations have stood out. These were concerns that the evidence on which some grant recommendations were made wasn’t sufficient to justify the grant, i.e., that they were ‘too risky.’ If we exclude grant recommendations that are subject to multiple, unresolved concerns, we can make a ‘conservative’ estimate of the percentage and dollar value of counterfactually unique grant recommendations made by the LTF Fund:
- Concerns with 1 grant recommendation worth $28,000 to hand out printed copies of the fanfiction HPMoR to international math competition medalists.
- Concerns with 2 grant recommendations worth $40,000 to individuals who are not currently pursuing one or more specific, concrete projects, but rather independent research or self-development. The concern is that these grants are based on the fund manager’s (managers’?) personal confidence in the individuals, and even the explanations for the grant recommendations expressed concern about the uncertainty in the value of grants like these.
- Concerns that multiple grants made to similar forecasting-based projects would be redundant, in particular with 1 grant recommendation worth $70,000 to the forecasting company Metaculus, which might be better suited to an equity investment in a startup than a grant from a non-profit foundation.
In total, these are 4 grants worth $138,000 that multiple commenters have raised concerns with, on the basis that the uncertainty around these grants means the recommendations don’t seem justified. To clarify, I am not making an assumption about what the value of these grants is. All I would say about these particular grants is that they are unconventional, but that insofar as the EA Funds are intended to be a kind of index fund willing to back more experimental efforts, these projects fit within the established expectations of how the EA Funds are to be managed. Reading all the comments, the one helpful, concrete suggestion was for the LTF Fund to follow up with grant recipients in the future and publish its takeaways from the grants.
Of the 20 recommendations made for unique grant recipients worth $673,150, if we exclude these 4 recommendations worth $138,000, that leaves 16 of 23 recommendations (~69.6% of the total), worth $535,150 of $923,150 (~58% of the total value), uniquely attributable to the EA Funds. Again, the grant recommendations excluded from this ‘conservative’ estimate are ruled out based on the uncertainty or lack of confidence in them from commenters, not necessarily the fund managers themselves. While presumably the value of any grant recommendation could be disputed, these are the only grant recipients for which multiple commenters have raised still-unresolved concerns so far. These grants are only now being made, so whether the fund managers’ best hopes for the value of each of these grants will be borne out is something to follow up on in the future.
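And the corresponding check for the ‘conservative’ estimate, again a minimal sketch using only the figures reported above ($28,000 + $40,000 + $70,000 = $138,000 across the 4 contested grants):

    # 'Conservative' estimate: exclude the 4 contested grants from the
    # counterfactually unique totals.
    total_grants = 23
    total_dollars = 923_150
    unique_grants, unique_dollars = 20, 673_150

    contested_grants = 4
    contested_dollars = 28_000 + 40_000 + 70_000  # $138,000

    conservative_grants = unique_grants - contested_grants      # 16
    conservative_dollars = unique_dollars - contested_dollars   # $535,150

    print(f"{conservative_grants / total_grants:.1%} of grant recommendations")  # ~69.6%
    print(f"{conservative_dollars / total_dollars:.1%} of money disbursed")      # ~58.0%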
Conclusion
While these numbers don’t address suggestions for how the management of the Long-Term Future Fund can still be improved, overall I would say they show the Fund has improved extremely significantly since last year at achieving a high rate of counterfactually unique grants to nascent or experimental projects that are typically missed in EA donations. With some of the suggested improvements, like hiring professional clerical assistance to help manage it, the Long-Term Future Fund is employing a successful approach to making unique grants. I hope the other EA Funds try emulating and building on this approach. The EA Funds are still relatively new, so measuring their track record of success with these grants remains to be done, but this report provides a great foundation for starting to do so.