Request for comments: EA Projects evaluation platform

Edit: It is likely there will be a second version of this proposal, modified based on the feedback and comments.

The effective altruism community has a great resource—its members, motivated to improve the world. Within the community, there are many ideas floating around, and entrepreneurially-minded people keen to execute on them. As the community grows, we get more effective altruists with different skills, yet in other ways it becomes harder to start projects. It’s hard to know who to trust, and hard to evaluate which project ideas are excellent, which are probably good, and which are too risky for their estimated return.

We should be concerned about this: the effective altruism brand has significant value, and bad projects can have repercussions for both the perception of the movement and the whole community. On the other hand, if good projects are not started, we miss out on value, and miss opportunities to develop new leaders and managers. Moreover, inefficiencies in this space can cause resentment and confusion among people who really want to do good and have lots of talent to contribute.

There’s also a danger that as a community we get stuck on the old core problems, because funders and researchers trust certain groups to do certain things, but lack the capacity to vet new and riskier ideas, and to figure out which new projects should form. Overall, effective altruism struggles to use its greatest resource—effective people. Also, while we talk about “cause X”, currently new causes may struggle to even get serious attention.

One idea to address this problem, proposed independently at various times by me and several others, is to create a platform which provides scalable feedback on project ideas. If it works, it could become an efficient way to separate signal from noise and spread trust as our community grows. In the best case, such a platform could help alleviate some of the bottlenecks the EA community faces, harness more talent and energy than we are currently able to do, and make it easier for us to make investments in smaller, more uncertain projects with high potential upside.

As discussed in a previous post, What to do with people, I see creating new network structures and extending existing ones as one possible way to scale. Currently, effective altruists use different approaches to get feedback on project proposals depending on where they are situated in the network: there is no ready-made solution that works for them all.

For effective altruists in the core of the network, the best method is often just to share a Google Doc with a few relevant people. Outside the core, the situation is quite different, and it may be difficult to get informative and honest feedback. For example, since applications outnumber available budget slots, by design most grant applications for new projects are rejected; practical and legal constraints mean that these rejections usually come without much feedback, which can make it difficult to improve the proposals. (See also EA is vetting-constrained.)

For all of these reasons, I want to start an EA projects evaluation platform. For people with a project idea, the platform will provide independent feedback on the idea, and an estimate of the resources needed to start the project. In a separate process, the platform would also provide feedback on projects further along in their life, evaluating the fit between team and idea. For funders, it can provide an independent source of analysis.

What follows is a proposal for such a platform. I’m interested in feedback and suggestions for improvement: the plan is to launch a cheap experimental run of the evaluation process in approximately two weeks. I’m also looking for volunteer evaluators.

Evaluation process

Project ideas will get evaluated in a multi-step process:

1a. Screening for infohazards, proposals outside the scope of effective altruism, and otherwise obviously unsuitable proposals (ca. 15 min per project)

1b. Peer review in a debate framework. Two referees will write evaluations: one focusing on the possible negatives, costs, and problems of the proposal, and the other on the benefits. Both referees will also suggest what kind of resources a team attempting the project should have. (2-5 h per analyst per project)

1c. Both the proposal and the reviews will be published anonymously on the EA Forum, gathering public feedback for about one week. This step will also allow back-and-forth communication with the project initiator.

1d. A panel will rate the proposal, using the information gathered in steps 1b and 1c and highlighting which parts of the analysis they consider particularly important. (90 min per project)

1e. In case of disagreement among the panel, the question will be escalated and discussed with some of the more senior people in the field.

1f. The results will be published, probably both on the EA projects platform website and on the forum.

In a possible second stage, if a team forms around a project idea, it will go through a similar evaluation, focusing on the fit between the team and the idea, possibly with the additional step of a panel of forecasters predicting the project’s probability of success and expected impact over several time horizons.

Currently, the plan is to run a limited test of the viability of the approach, on a batch of 10 project ideas, going through steps 1a-f.
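As a rough illustration of what these time estimates add up to, here is a small back-of-envelope sketch (in Python, purely for illustration; the numbers are just the per-step estimates above, and steps 1c, 1e and 1f are left out because their cost is either community time or not yet estimated):

```python
# Back-of-envelope estimate of evaluator time per project, using only the
# per-step figures stated above (illustrative, not a commitment).

SCREENING_MIN = 15          # step 1a: ~15 min per project
REVIEW_HOURS = (2, 5)       # step 1b: 2-5 h per analyst per project
REVIEWERS_PER_PROJECT = 2   # step 1b: two "opposing" referees
PANEL_MIN = 90              # step 1d: ~90 min per project


def hours_per_project():
    """Return (low, high) evaluator-hours per project for steps 1a, 1b and 1d."""
    fixed = (SCREENING_MIN + PANEL_MIN) / 60.0
    low = fixed + REVIEWERS_PER_PROJECT * REVIEW_HOURS[0]
    high = fixed + REVIEWERS_PER_PROJECT * REVIEW_HOURS[1]
    return low, high


if __name__ == "__main__":
    low, high = hours_per_project()
    batch = 10  # planned size of the first trial batch
    print(f"Per project:     {low:.1f}-{high:.1f} evaluator-hours")
    print(f"Per batch of {batch}: {batch * low:.0f}-{batch * high:.0f} evaluator-hours")
```

If the 1b reviews are split among 5-8 analysts, this would mean roughly 2-4 reviews (about 5-20 hours) per analyst for the first batch, which is broadly consistent with the time commitment mentioned below.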

Why this particular evaluation process

The most bottlenecked resource for evaluations, apart from structure, is likely the time of experts. This process is designed to use experts’ time in a more leveraged way, to draw on input from the broader community, and also to promote high-quality discussion on the EA forum. (Currently, problematic project proposals posted on the forum often attract downvotes, but rarely detailed feedback.)

Having two “opposing” reviews attempts to avoid the social costs of not being nice: with clear roles, everyone will understand that writing an analysis which tries to find flaws and problems is part of the job. It can also provoke higher-quality public discussion.

Splitting steps 1b, 1c, and 1d is motivated by the fact that mapping arguments is a different task from judging them.

Project ideas are on a spectrum: some are relatively robust to the choice of team, while the impact of others may depend mostly on the quality of the team, including the sign of that impact. By splitting the evaluation of ideas from the evaluation of (idea + team), it should be possible to communicate opinions like “this is a good idea, but you are likely not the best team to try it” with more nuance.

Overall, the design space of possible evaluation processes is large, and I believe it may just be easier to run an experiment and iterate. Based on the results, it should be relatively easy to make some of the steps 1a-e simpler, omit them altogether, or make them more rigorous. Also, the stage 2 process can be designed based on the stage 1 results.

Evaluators

I’m looking for 5-8 volunteer analysts, who will write the reviews for step 1b of the process. The role is suitable for people with skills similar to those of a generalist research analyst at OpenPhil.

The expected time commitment is about 15-20 hours for the first run of evaluations and, if the project continues, about 15-20 hours per month. The work will mostly happen online in a small team communicating on Slack. There isn’t any remuneration, but I hope there will be events like a dinner during EA Global, or similar opportunities to meet.

Good reasons to volunteer

  • you want to help with alleviating an important bottleneck in the EA project ecosystem

  • the work experience should be useful if you are considering working as a grant evaluator, analyst, or similar

Bad reasons to volunteer

  • you feel some specific project by you or your friends was undeservedly rejected by existing grant-making organizations, and you want to help the project

Strong reason not to volunteer

  • there is a high chance you will flake out of voluntary work even if you commit to doing it

If you want to join, please send your LinkedIn/CV and a short, paragraph-long description of your involvement with effective altruism to eaprojectsorg@gmail.com.

Projects

In the first trial, I’d like to test the viability of the process on about 10 project ideas. You may want to propose a project idea either because you would be interested in running the project yourself, or because you would want someone else to lead it, with you helping e.g. via advice or funding. At present, it probably isn’t very useful to propose projects you don’t plan to support in some significant way.

It is important to understand that the evaluations absolutely do not come with any promise of funding. I would expect the evaluations to help project ideas that come out of the process with positive feedback, because funders, EAs earning to give, and potential volunteers or co-founders may pick up the signal. Negative feedback may help with improving the projects, or with setting realistic expectations about the necessary resources. There is also value in bad projects not happening, and negative feedback can help people move on from dead-end projects to more valuable things.

Also, it should be clear that the project evaluations will not constitute any official “seal of approval”: this is a test run of a volunteer project and has not been formally endorsed by any particular organization.

I’d like to thank Max Daniel, Rose Hadshar, Ozzie Gooen, Max Dalton, Owen Cotton-Barratt, Oliver Habryka, Harri Besceli, Ryan Carey, Jah Ying Chung and others for helpful comments and discussions on the topic.