Announcing Effective Altruism Ventures
Today we’re launching Effective Altruism Ventures, a project of CEA’s Effective Altruism Outreach initiative. The goal of Effective Altruism Ventures is to test the theory that we can stimulate the creation of new high impact organizations by simply signaling that funding is available.
GiveWell has argued in multiple blog posts that interesting projects often do not appear until a major funder signals an interest in funding them. This aligns with my experience running the Technology and Innovation department at the Laura and John Arnold Foundation, and with Y Combinator's recently announced Requests for Startups. We designed Effective Altruism Ventures to provide this signal for EA-aligned projects.
For Projects
New projects can apply and go through a systematic evaluation process (the details of which are available here). We will introduce projects that pass the evaluation to our network of individual and institutional funders, and help find new funders if needed. We also provide strategic guidance, recruiting help, and more. We are both cause-neutral and neutral on organization type (e.g. nonprofit, for-profit, benefit corporation, etc.). Applications are rolling, but we devote more time to evaluations at set intervals throughout the year. The next evaluation sprint begins May 1, so interested projects should apply by then.
To get a better sense of our evaluation process and of Effective Altruism Ventures itself, we completed an evaluation of ourselves using the EA Ventures evaluation framework. You can read the evaluation here. The results of our evaluation indicate that there is insufficient evidence to recommend making donations directly to Effective Altruism Ventures at this time. This is consistent with our current plan of running the project on a volunteer basis for 3-6 months before fundraising to support operational costs.
For Funders
For funders, Effective Altruism Ventures is a risk-free way of gaining access to higher-quality projects. We will learn about your funding priorities and then introduce you to vetted projects that meet them. If you don't like a project, you are free to decline to fund it. We simply ask that you provide us with your reasons so we can improve our evaluation procedure.
We also help improve funder coordination for new projects. This helps funders get a clearer sense of whether their funding is fungible with that of other EA funders.
Want to get involved?
If you’re interested in getting involved in Effective Altruism Ventures, we’re looking for the following:
- Projects, especially those working in areas that are important, tractable, and uncrowded.
- Funders who are ideologically aligned with EA and are interested in seeing our deal flow.
- Experts in fields of interest who are willing to help us evaluate projects.
- Entrepreneurs without their own project who are interested in working on one of the projects we recommend.
- Partners who have strong networks and want to work closely with us to evaluate projects, find funders, and source new projects.
If you fall into one of the above groups and would like to chat more about Effective Altruism Ventures, feel free to schedule time to chat here or email me at kerry@eaventures.org.
Possibly unimportant, but what happened to EA Ventures? I stumbled across this because a paper by Roman V. Yampolskiy notes: “The author is grateful to Elon Musk and the Future of Life Institute and to Jaan Tallinn and Effective Altruism Ventures for partially funding his work on AI Safety.” The EA Ventures site now just redirects to CEA. There’s also a subsequent thread about “EA Ventures Request for Projects + Update.” Did it cease to exist after that? Why?
I’m glad someone is asking what happened with EA Ventures (EAV): it’s an important question that hasn’t yet received a satisfactory answer.
When EAV was discontinued, numerous people asked for a post-mortem of some type (e.g. here, here, and here) to help capture learning opportunities. But nothing formal was ever published. The “Celebrating Failed Projects” panel eventually shared a few lessons, but someone would need to watch an almost hour-long video (much of which does not relate to EAV) to see them all. And the lessons seem trivial (“if you’re doing a project which gives money to people, you need to have that money in your bank account first”) about as often as they seem insightful (“Finding excellent entrepreneurs is much, much harder than I thought it was going to be”).
If a proper post-mortem with community input had been conducted, I'm confident many other lessons would emerge*, including one prominent one: "Don't over-promise and under-deliver." This has obvious relevance to a grantmaking project that launched before it had lined up funds to grant (as far as I know, EAV only made two grants: the one Jamie mentioned and a $19k grant to EA Policy Analytics). But it also relates to more mundane aspects of EAV: my understanding is that applicants were routinely given overly optimistic expectations about how quickly the process would move.
The missed opportunity to learn these lessons went on to affect other projects. As just one example, EA Grants was described as "the spiritual successor to EA Ventures". And it did reflect the narrow lesson from that project, as it lined up money before soliciting grant applications. However, the big lesson wasn't learned, and EA Grants consistently over-promised and under-delivered throughout its entire history. It announced plans to distribute millions of dollars more than it actually granted, repeatedly announced unrealistic and unmet plans to accept open applications, explicitly described educational grants as eligible when they were not, granted money to a very narrow set of projects, and (despite its public portrayal as a project capable of distributing millions of dollars annually) did not maintain an "appropriate operational infrastructure and processes [resulting] in some grant payments taking longer than expected [which in some cases] contributed to difficult financial or career situations for recipients."
EAV and EA Grants have both been shuttered, and there’s a new management team in place at CEA. So if I had a sense that the new management had internalized the lessons from these projects, I wouldn’t bring any of this up. But CEA’s recently updated “Mistakes” page doesn’t mention over-promising/under-delivering, which makes me worry that’s not the case. That’s especially troubling because the community has repeatedly highlighted this issue: when CEA synthesized community feedback it had received, the top problem reported was “respondents mentioned several times that CEA ‘overpromised and under delivered’”. The most upvoted comment on that post? It was Peter Hurford describing that specific dynamic as “my key frustration with CEA over the past many years.”
To be fair, the “Mistakes” page discusses problems that are related to over-promising/under-delivering, such as acknowledging that “running too many projects from 2016-present” has been an “underlying problem.” But it’s possible to run too many projects without overpromising, and it’s possible to be narrowly focused on one or a few projects while still overpromising and under-delivering. “Running too many projects” explains why EA Grants had little or no dedicated staff in early 2018; it doesn’t explain why CEA repeatedly committed to scaling the project during that period despite not having the staff in place to execute. I agree CEA has had a problem of running too many projects, but I see the consistent over-promising/under-delivering dynamic as far more problematic. I hope that CEA will increasingly recognize and incorporate this recurring feedback from the EA community. And I hope that going forward, CEA will prioritize thorough post-mortems (that include stakeholder input) on completed projects, so that the entire community can learn as much as possible from them.
* Simple example: with the benefit of hindsight, it seems likely that EAV significantly overinvested in developing a complex evaluation model before the project launched, and that EAV’s staff may have had an inflated sense of their own expertise. From the EAV website at its launch:
Some info here: https://youtu.be/Y4YrmltF2I0?t=157
Thanks! “Celebrating Failed Projects” also nicely characterises my motivation for actually making this comment, rather than letting it slide.
Where are these recommendations? Do you have YC-style Request For Startups somewhere or something? I can’t find them. “EA-aligned projects” doesn’t seem like a very concrete signal (the suggestion “start an EA-aligned project” is a lot less actionable than e.g. “replace email” or whatever the new RFSs are).
We have a list of these projects that we've been circulating privately to interested people. After we've had a chance to add any additional funders on the basis of this blog post, we'll add the list of projects we think are interesting, and the list of projects funders think are interesting, to the website.
Cool, I look forward to seeing it.
As someone who hasn't even yet completed an undergraduate degree, and hasn't been superb at networking, I'm not confident my network is great. Still, from getting involved in effective altruism and other intellectual circles, I've met an eclectic bunch of people. I'll suggest contacting EA Ventures, or submitting an application, to those running projects and others I know. I intend to make suggestions at my own discretion of who I believe might be effective. I've read in full both the evaluation process EA Ventures will be using and its own self-evaluation, so I think I have a mental grasp of the heuristic criteria you're using, and I'll try thinking along those lines when pondering the potential of others I know. Honestly, I think I'll screen out most people in my network by the time I'm finished. Beyond that, I'm assuming EA Ventures will itself be able to determine whether someone I put in touch with it is a good fit for working with the organization in some capacity.
Questions
What would you qualify as expertise? And, what are your “fields of interest?”
Are you interested in existing projects which could produce value for a flourishing world, and do a lot more of it if they received funding from EA Ventures?
In finding entrepreneurs to work with, is EA Ventures looking for any particular qualities, interests, or aptitudes?
Are there any geographical limitations for whom EA Ventures can fund at this point?
I don't know how to qualify expertise. I think I'll know it when I see it. Fields of interest are anything important, uncrowded, and tractable. Areas that we find particularly promising are animal rights, far future/existential risk projects, and global poverty.
Yes, but we focus on new projects or projects with only a little bit of traction. Established projects have their own funding mechanisms and networks, so we don’t have as much value to add there.
We’re looking for people who have outstanding skills along the attributes we mention in our team composition model. I would guess that the EA community is most scarce on persuasion but that’s just a guess.
Not as such. Some funders may only be able to fund certain projects for tax reasons, but I think our network is vast enough to transcend nationality.
I think there is an important (and contrary to your evaluation, not so crowded) opportunity here, and I’m glad EA Ventures is getting going.
Here’s something which confuses me about your model. Perhaps you can clarify.
Your evaluation procedure seems to estimate how good projects are, and then reduce this to a binary fund/don't-fund decision. How do you decide where this threshold is? For VC recommendations there's a natural threshold: things with expected returns better than market returns. Are you picking a baseline, so that you suggest funding precisely the things that you think are better uses of funds (in expectation) than common donation targets such as AMF? Or are you making a guess about the amount of funding you will attract, and choosing a threshold based on that? If the latter, wouldn't it be more helpful to just provide a ranked list?
A related question:
When you recommend funding a project, will you recommend a funding level? Presumably many projects will at many points have diminishing expected returns on extra funding. Will EA Ventures be aiming to make judgements about this curve in its recommendations?
Our threshold for funding is set at GiveWell-recommended charities. Namely, if we don't think a project is plausibly better than, e.g., AMF, we plan not to recommend it.
A pernicious failure mode for the project would be moving money away from good, proven projects and towards bad, unproven ones. By only recommending projects that could (in expectation) be better than AMF, we mitigate that failure mode.
In terms of funding level, we ask how much money the projects need and what they plan to do with it in the application. We also plan to ask about this in the future. The goal is to ensure that the projects have room for more funding. We don’t plan to recommend specific funding levels, but I can see us doing this if donors would find it valuable.
Also, to clarify on the crowdedness of the project, I could see our uncrowdedness ranking improving as we learn more about the funding space. It’s certainly plausible that the project will turn out to be uncrowded.
Thanks. To be sure I’m reading that right: you mean projects that you think are better in expectation than AMF, or that you think someone might reasonably think is better in expectation than AMF?
I expect that if most/all of your recommendations get funded, it would be useful to have recommendations for the amount of funding until they are in expectation worse than AMF at the margin. If not all your recommendations get funded, it would be useful to have extra ranking between them. It may be that donors are happy making these judgements, but just as you are likely to have comparative advantage identifying the projects, you’ll probably also be well placed to identify funding requirements or trade-offs between your recommendations.
Projects that the EAV team and expert evaluators think might be better in expectation than AMF. I used the other phrasing because we do two stages of evaluation. At the first, we discard projects that "are not plausibly better in expectation than AMF", where that means it is not plausible that further evidence will show the project to be more worth funding.
We should talk on Skype about how to accurately model the crossover point between when a project is better than AMF and worse than AMF given certain amounts of funding. I agree that this would be valuable, but I don’t yet know how to determine this.
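To illustrate the shape of the problem (not a solution), here is a toy numerical sketch in Python. The impact curve below is a made-up diminishing-returns function, and every number in it is an assumption for illustration only:

```python
# Toy sketch: find the funding level at which a project's marginal impact
# per dollar falls below AMF's. The impact curve is a made-up
# diminishing-returns function, purely for illustration.
import numpy as np

AMF_MARGINAL = 1.0  # normalized impact of a marginal dollar to AMF

def project_impact(funding):
    return 3.0 * np.log1p(funding)  # assumed total-impact curve

funding = np.linspace(0.0, 10.0, 1001)
marginal = np.gradient(project_impact(funding), funding)
crossover = funding[np.argmax(marginal < AMF_MARGINAL)]
print(f"fund up to ~{crossover:.1f} units; beyond that, AMF wins at the margin")
```

The hard part, of course, is estimating the actual impact curve, which is exactly the piece I don't yet know how to determine.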
This is a great initiative, and a helpful write-up. Thanks Kerry.
So you want to find ventures that are expected to be better than the most effective charity (or thereabouts) in the world?
I'm a bit worried that you will rule out many fantastically valuable ventures that might otherwise be discouraged, or never stimulated into existence at all.
If these ventures were to use only EA funds or mainly EA funds, then that would be right.
However, if a venture has (let's say) a 10% chance of growing out of the EA world and attracting funding that wouldn't otherwise exist, and is only 1/5 as effective as AMF, but in the breakout case it lasts for 40 years and wouldn't have been done otherwise, then it could still be worth funding?
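Spelling out that arithmetic (every number below is just my hypothetical assumption from above, not an estimate):

```python
# All numbers are the hypothetical assumptions from the comment above.
p_breakout = 0.10        # chance of attracting non-EA funding
relative_impact = 1 / 5  # effectiveness per dollar relative to AMF
years = 40               # years the externally funded venture persists

# Expected impact, in AMF-equivalent units per year of leveraged funding,
# assuming that funding wouldn't otherwise exist:
expected_value = p_breakout * relative_impact * years
print(expected_value)  # 0.8
```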
Further, the learning from the process might be worth something significant if it's a necessary hurdle on the way to becoming an uber-effectiveness incubator?
Obviously you want to take the highest expected value anyway so this might be an academic discussion.
Haha, had a look at the people behind this—forget what I said—I’m sure that between all the funders/backers you’ve got more than enough learning to identify projects that are better than AMF. Good luck!
I think you’re selling yourself short by saying that EA Ventures is in a crowded area, and by saying that it does not deserve funding—to me, given the kinds of funds you could expect to move, it seems non-crazy to spend funds on EAV directly. And presumably this is already happening, with people like yourself and Tyler working for CEA while spending time on this project—something that should be supported and perhaps upscaled.
Thanks for the vote of confidence :-)
To be clear, I think it’s entirely possible that it will make sense to ask for funds for EAV in the future. Right now, I think a) the evidence isn’t strong enough and b) we can gain the evidence we need without spending donor money.
Does this mean that you wouldn’t recommend funding another project at a similar stage to EA Ventures, with a similarly robust case? It seems to me that it would be appropriate to fund it a small amount, in a similar way that you are in effect funding EA Ventures a small amount (with your time) to learn more. Are you planning to avoid funding at that scale?
I agree. If it didn't seem that EA Ventures could gather the evidence it needed absent money, then I would probably be in favor of a small amount of funding to launch the project and gather data. Since the members of the founding team have funding elsewhere, this wasn't necessary.
Still, I wonder if funds could speed up EAV’s growth and learning.
This may be the case if the number of applications we get exceeds our expectation and we need to pay people to help us evaluate them.
Congrats on launching! This is super interesting so I have a bunch of questions. I’ve split them up into multiple comments for ease of threading.
First, from your website:
How are the weights actually computed? How was the model fit? On what dataset? How does the score influence your recommendations?
I’m going to be posting the full equation on the website in the near future. It’ll be easier to answer in-depth questions about the process after that has been posted.
The evaluation process includes an assessment of how important each characteristic is to the project at hand, which determines the weightings. So, if our raters assess persuasion as particularly important to a given project, the weighting of persuasion in the overall score will be greater. This allows our weightings to adapt to the details of each project. We also weight each assessment of a variable's importance by the expertise of the evaluator.
Right now, a good score in the evaluation process is necessary but not sufficient for a project to be funded. This is because I expect to significantly update the details of the evaluation process as we review our inaugural round of applicants.
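As a rough illustration of the adaptive weighting, it works something like the sketch below. The data shapes and aggregation rule here are simplified stand-ins; the actual equation, which will be posted on the website, may differ:

```python
# Simplified illustration of the adaptive weighting described above; the
# real equation will be posted on the website and may differ.
def attribute_weight(importance_ratings):
    # Each rater's importance judgment counts in proportion to their expertise.
    num = sum(r['importance'] * r['expertise'] for r in importance_ratings)
    den = sum(r['expertise'] for r in importance_ratings)
    return num / den

def project_score(attributes):
    # 'attributes' maps each characteristic (e.g. 'persuasion') to its rated
    # score and the raters' importance judgments for this particular project.
    weights = {name: attribute_weight(a['importance_ratings'])
               for name, a in attributes.items()}
    total = sum(weights.values())
    return sum(weights[name] * attributes[name]['score']
               for name in attributes) / total
```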
I imagine Ben would give robust criticism for this before or after it is posted. Presumably it’s better for that to happen before?
I’m picturing a simple linear model that is based on arbitrary weights. I’ve not read the literature here but if this can improve decision-making (like fitted models, which certainly can) then it would be an impressive fact.
I’d love to be a part of the discussion of this equation. I was just going to wait patiently but am speaking up in case it’s taken to email. :)
If it’s taken to email I’ll include you on the list :-)
I think it would be great to discuss it on the EA forum, both from the point of view of transparency, and because it’s a much better medium for multi-threaded discussion. (But I understand if you’d rather keep it private if it’s not very refined right now.)
We use simple linear models all the time in investment; they are actually quite good. Best of all they are robust. Like Owen I would love to discuss this.
For example, today I was trying to predict some property of companies. I came up with 5 signals I can easily calculate which all capture some information about the underlying property, turned them into 5 binary indicators, and just added them together for a composite signal. No attempt at producing weights, but for various reasons I’m pretty happy with this approach, and I’m confident my boss would endorse it too if we went into details.
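Concretely, with made-up indicator names (the real signals aren't something I can share), the composite looks like this:

```python
# Unit-weighted composite: five binary indicators summed, no fitted weights.
# Indicator names are invented for illustration.
def composite_signal(company):
    indicators = [
        company['revenue_growing'],
        company['margin_above_sector_median'],
        company['low_leverage'],
        company['insider_buying'],
        company['positive_earnings_surprise'],
    ]
    return sum(int(bool(x)) for x in indicators)  # composite score, 0..5
```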
It looks like there’s evidence for using them to predict continuous variables using continuous inputs, which might be your case. Also, if you’re using it to supplement your personal decision-making, then on the face of it, that’s more likely to work well than using it as a substitute.
http://effective-altruism.com/ea/fo/announcing_effective_altruism_ventures/2te
The book linked to in the evaluation process page on the website suggests that a linear model where the sign is determined and the weights are random beats expert judgment.
I can’t get to the book. Is there any more information about the experiment?
You can read it here. The money pages are 63-64.
Thanks. Looks like the original experiment is here.
Just looking at the abstract, it seems like the article is describing a situation where you have numerical inputs, which doesn’t map perfectly to EA Ventures: “This article presents evidence that even such improper linear models are superior to clinical intuition when predicting a numerical criterion from numerical predictors.”
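For anyone who doesn't want to dig up the paper, the construction is roughly the following sketch (synthetic data, illustrative variable names):

```python
# Sketch of an "improper" linear model in Dawes's sense: predictor signs
# fixed by theory, weight magnitudes random, predictors standardized.
# The data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))             # 100 candidates, 3 numeric predictors

Z = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each predictor
signs = np.array([+1, +1, -1])            # directions set by theory
weights = signs * rng.uniform(size=3)     # random magnitudes

scores = Z @ weights
ranking = np.argsort(-scores)             # best candidates first
```

The surprising finding is that rankings produced this way tend to beat unaided expert judgment.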
It seems to me that for many of the organizations you fund, if they wouldn’t otherwise be funded by other people, a verdict of “wait for more evidence” will have approximately the same impact as a verdict of “do not fund.” EAV seems somewhat special in that its founders’ funders are aligned with their goals enough to be fine with them spending substantial time on the project. I hope you take this into account when making recommendations for other organizations.
I agree.
In cases where a project is promising but not yet ready to receive funding, we can define the conditions under which we might fund the project. For example, if a project lacks the necessary technical talent, but is otherwise promising we might provide this feedback in the evaluation. I think this kind of conditional feedback might make it significantly easier for the project to acquire the necessary talent because it comes with a strong likelihood of funding associated with it.
Over time we can also gain the necessary expertise to positively impact the plans of the entrepreneurs. In the way that 80K measures career plan changes, we might measure venture plan changes as an additional success metric.
You need to specify how and when you're going to get the evidence. It's like if a doctor said "Wait for evidence of diabetes" and just left it at that. Instead, they specify whether they're going to a) do a test, b) watch and wait with scheduled follow-up, or c) do nothing. And that seems right for EA Ventures too. For startups, you can't simultaneously not reject them, not run a test, and not follow up.
I completely agree and this is basically our plan with respect to feedback for entrepreneurs.
From the site:
What’s the nature of your collaboration with YC and Full Circle?
In both cases we expect project sourcing to be the primary benefit of the collaboration. We plan to pass interesting projects along to YC and FC and vice-versa.
What does it mean for YC to pass “interesting” projects on to you? Like, you guys are a row in their database of funders that they give to their portfolio companies? They refer nonprofits to you if they don’t think they’re a good fit for YC? Something in-between?