Centre for Effective Altruism (CEA): an overview of 2017 and our 2018 plans
This post is cross-posted from the Centre for Effective Altruism (CEA) blog. By posting to the EA Forum, we hope to increase the visibility of our work and give everyone the ability to comment and ask questions.
This has been a big year for the Centre for Effective Altruism (CEA). We launched EA Grants, which funded 21 projects in the EA community; we created a new donation platform called EA Funds to help people donate more effectively; and we ran three EA Global conferences to bring the community together.
In this post, we share what CEA has been doing this year, and give you a taste of the things we will be working on in 2018.
This post includes:
CEA’s mission and vision
Highlights from 2017
A brief review of 2017 and plans for 2018 by team
A non-exhaustive list of our mistakes and plans for improvement this year
Information on our current funding situation
An invitation to join our supporter mailing list for monthly updates
For general inquiries, please contact Kerry Vaughan, who leads our Individual Outreach Team. If you would like to discuss any part of our plans in more depth, please reach out to the relevant team lead (first name [at] centreforeffectivealtruism.org).
CEA’s Mission and Vision
CEA aims to solve the most important problems, regardless of which species, location, or generation they affect. By doing this, we build towards our vision of an optimal world.
Due to the scale of the potential impact, our current best guess is that work to improve the long-term future is likely to be the best way to help others. However, we think that there is a good chance that we are wrong. For this reason, we also want to continue to devote resources towards finding better ways to address the world’s biggest problems; as a part of this, we want to learn more about problems we may not yet be paying enough attention to.
Improving the world’s long-term trajectory will be very difficult. We believe that change on this scale cannot be achieved by individuals working alone; it requires a community working together. We currently think that such a community particularly needs people who are very engaged with these ideas, and who are able to do full-time research or policy work in the relevant areas. As such, we focus on attracting and supporting these highly engaged and skilled people.
This explains our mission:
Create a global community of people who have made helping others a core part of their lives, and who use evidence and scientific reasoning to figure out how to do so as effectively as possible.
CEA Highlights from 2017
The year started with CEA going through Y Combinator’s startup accelerator program. Y Combinator is a startup incubator that provides seed funding and advice to early-stage companies. We were one of the few non-profits accepted into their three-month program, which gave us access to one-on-one advice from their partners. It was during this time that we built EA Funds, a platform that allows users to pool their money with like-minded donors so that fund managers can direct the money to the best giving opportunities in different cause areas. We often talk about the EA community needing money, talent, and ideas in order to succeed. A period when we had access to some of the world’s most successful entrepreneurs seemed like the right time to build a product focused on money.
While EA Funds was perhaps our highest profile project, our EA Grants program in the summer attracted over 700 applicants. We wanted to find ways to support the EA community in innovative new projects, and after careful evaluation, we decided to fund 21 projects.
In October, we set up CEA’s individual outreach program, which aims to help people get deeply involved with the effective altruism community more quickly, through one-on-one mentoring.
Most recently, we launched the Giving What We Can pledge campaign. Our focus has been getting current members to review where they donate and to encourage people to think seriously about the career-long pledge.
Internally, we have consolidated and built capacity. Tara MacAulay, previously our COO, moved into the CEO role, which better reflected the work she had been doing for some time. Will MacAskill moved from his CEO role to become president, and he will now focus on academic and public engagement roles. Around the same time, we consolidated into five teams, with five team leaders:
1. Community (Larissa Hesketh-Rowe),
2. Operations (Miranda Dixon-Luinenberg),
3. Tech (Sam Deere),
4. Research (Max Dalton),
5. Individual Outreach (Kerry Vaughan).
Over the course of the year, this management capacity helped us to grow the team, from 17 at the beginning of the year to 21 at the end of the year.
Below we give more details of what each team has done in 2017 and their plans for 2018.
Research Team
The Research Team aims to communicate new and important ideas to the effective altruism community. Organizations, academics, and independent researchers within the effective altruism community produce valuable research, but such research can be difficult to find or apply; we want to make that process easier, so that people can find, explore, and build on these ideas.
There were several changes to research at CEA during 2017.
First, we discontinued the Philanthropic Advising Team in February 2017. This project was experimental, and while the team had some success in providing advice to philanthropists, the returns were not competitive with other projects, so we decided to end the project in order to be more focused. Our policy work moved to a more natural home at the Future of Humanity Institute (FHI), again to allow us to focus on other projects.
As planned, the Global Priorities Institute, a project that we incubated in 2016 and the first half of 2017, became a part of the University of Oxford.
Finally, the former Fundamentals Research Team, which had operated separately from other parts of CEA, fully merged with the rest of the organization in May. This allows us to coordinate more easily, which is especially important given that the team’s focus has shifted toward communicating some of the key ideas in the community rather than conducting our own research. The team is now Max Dalton and Stefan Schubert, and it is advised by Owen Cotton-Barratt from FHI.
Research Team Activity in 2017
For the first half of the year, the Fundamentals Research Team was focused on producing new cause prioritization research. This included work to clarify important concepts like diminishing returns and cause neutrality. It also included discussion of community norms and why we should be especially wary of hard-to-reverse decisions.
In the second half of the year, we focused more on communicating existing research and ideas in effective altruism. We noticed that there were many ideas that were unpublished or scattered across a variety of personal blogs. We wanted to make these ideas more accessible to people who want to engage more deeply with effective altruism. We published a series of cause profiles on the long-term future, animal welfare, global health, and effective altruism community building. We also rewrote the _Introduction to Effective Altruism_, and we transcribed and collated some of the best recent research in effective altruism on our resources page. The resources page is intended to be a beta version of our 2018 project, a series of research articles covering some of the key ideas in effective altruism. We hope this series will quickly introduce key concepts to those who are new to effective altruism.
In addition, we hosted several research fellows over the summer, supporting them to produce original research and training them in relevant skills. Most of this research was posted on the EA Forum (e.g., Danae Arroyos-Calvera on DALYs, Alex Barry and Denise Melchin on the Causal Networks Model, and Emily Tench on Community Norms).
Impact Review
It is generally difficult to evaluate the impact of research, since many of the effects are indirect.
Overall, while our more exploratory work earlier in the year seemed somewhat useful, we found that we were not able to reliably generate important new considerations. This is why we are now focusing more on communicating existing ideas that are not widely shared. We are particularly excited about the changes we made to the content of www.effectivealtruism.org. We think it now offers a good summary of current thinking in effective altruism; the website should be clearer for people new to the community and a good reference for more established community members.
We also think that running the research fellowship was worthwhile. Not only did the summer fellows produce useful research, but they developed skills and understanding that will allow them to have a greater impact in the future.
Plans for 2018
In 2018, we hope to make research in effective altruism even more accessible.
Our key project will be a series of articles, to be published on EffectiveAltruism.org, which will give an overview of current thinking in effective altruism and our approach to cause prioritization. Although there are many good ideas in the community, many are unpublished or published in an obscure place. We believe that we can provide a lot of value by making these ideas more accessible, as well as by creating a common reference work for the community.
We will also continue to improve other aspects of the content and design of EffectiveAltruism.org. We will further support the intellectual community working on effective altruism. This will include working with the tech team on the design of a new Effective Altruism Forum, and supporting more communication and collaboration between professional EA researchers.
Individual Outreach Team
The goal of the Individual Outreach Team is (1) to identify people within the effective altruism community who we expect will make big contributions to important projects and (2) to help them have a greater impact.
This is a new team, and we are experimenting with this concept because of two considerations:
The heavy-tailed distribution thesis: It seems plausible that the distribution of impact is “heavy-tailed” in that a small number of people might provide a significant amount of the value that the community creates.
Self-sorting: People tend to interact with others who they perceive to be similar to themselves.
If both of these claims are true, then the way we have focused our community building efforts may be missing some people. At present, new people usually get involved with the EA community through local groups, events like EA Global, and online discussion forums. However, it seems plausible that the next Peter Singer or Nick Bostrom will be seeking a very specific peer group, and thus may not get involved with the community via local groups or effective altruism conferences alone.
The individual outreach team believes that it is important to identify and connect the people with the largest potential for impact, even if these people are not interacting with the EA community through our standard forums. That is why we’re putting more emphasis on developing our ability to make individual connections between people who we can help and who may be able to move the needle on important problems.
Individual Outreach Team Activity in 2017
The Individual Outreach Team was formed in September 2017, so the team is relatively new. However, here are some of the activities that we have engaged in so far.
Meetings at EA Global: London
We reviewed applications to EA Global and organized short, one-to-one meetings with around one-third of all EA Global attendees during the conference.
During these meetings, we looked for ways that we could deliver value to attendees by helping them improve their plans, get a better sense of the landscape of EA projects, and meet others working on similar things.
Post-EAG: London retreat
We held a small retreat for around 30 highly-engaged community members after EA Global London. We aimed to help attendees think through their careers and to discuss their plans with one another.
We thought the retreat would be successful if one of the attendees made a major update to their plans. In fact, we saw four major plan changes, as well as a number of less significant plan changes.
EA Grants
The Individual Outreach Team has taken on EA Grants, which was previously established by members of the Research Team. EA Grants aims to provide funding for high-impact projects, especially those that may not be funded by organizations like the Open Philanthropy Project (Open Phil). We allocated £370,000 to 21 projects in our first round of Grants. More details can be found here.
Impact Review
Our main impact this quarter came from the post-EAG retreat. Immediately after the retreat, four attendees made significant changes to their plans. (Three people moved from earning to give to direct work, and one changed their research trajectory.) In addition, a number of other attendees engaged in more modest plan improvements.
It is worth noting that not all of these changes were entirely due to the retreat. Some might have happened anyway, but happened sooner due to the retreat, and some changes might turn out to be less valuable than we anticipate. Nevertheless, the level of changes individuals made vastly exceeded our expectations.
Since this is a new project, one of our key goals is to learn more. Meeting with hundreds of people at EA Global helped us to get rapid feedback on how we could be more helpful. The results of the November retreat indicated that retreats may be a useful mechanism for allowing people to develop their plans quickly.
It is difficult to assess the full impact of the grants we distributed at this stage. However, there appear to be some early wins:
The new LessWrong website is being well used (see strategy document here).
David Denkenberger, one of our grant recipients, has been publishing work we funded to the EA Forum, and he has also set up a new organization working on related issues.
Some of the people we funded for machine learning research are producing publications, and gained opportunities, based on the work that we funded.
Plans for 2018
Grants: We are planning to run EA Grants throughout 2018, with an anticipated budget of around £2m. There are some changes from the last round.
First, we plan to accept applications year-round with quick reviews and responses for urgent applications and quarterly reviews for less urgent applications.
Second, we plan to move the evaluation processes even further in the direction of mostly evaluating the merits of the applicants themselves rather than their specific plans. This is because:
We expect most plans to be relatively speculative and therefore subject to change;
We are time and resource-constrained in how continuously we can monitor projects, so we need to make sure we have high confidence in grantees; and
We do not think we can develop expertise in all possible projects, but we can develop expertise in evaluating the applicants.
Finally, we plan to move further in the direction of a hits-based giving approach, using EA Grants to place bets on risky, unusual, or controversial projects that seem plausibly very valuable in expectation.
Retreats: We plan to work with the Community Team to run more retreats throughout the year, with a target of running approximately one per quarter. We also plan to experiment more with different formats and activities during these events.
Mentoring: We plan to mentor promising community members on a weekly or bi-weekly basis as a way of gaining more in-depth feedback on how we can help people accomplish their goals more quickly.
Community team
CEA believes that the EA community could be an important way to influence the world’s long-term trajectory for the better. We believe that a tightly coordinated group of people, working together, can have much more of an impact than each individual working alone. This means that — beyond the money, talent, and ideas that we often discuss as being necessary for success — we also need to be able to coordinate as a community.
The Community Team at CEA works to encourage that coordination. We facilitate some of the spaces where the community comes together, both online and in-person (sometimes in partnership with other organizations in the community). We also work to improve cooperation by helping to shape community norms and culture. We advise local groups, hold events, support the Giving What We Can community, moderate online discussions and mediate disputes.
As such, we have a lot of cross-over with the other teams at CEA. Many of the people that our Individual Outreach Team looks to mentor come from local groups, and our events are a great way for individuals to get one-on-one advice.
Although the Community Team was created in the summer of 2017, many of our projects have been running in some form for much longer.
Below is a summary of the Community Team’s accomplishments this year and plans for the future.
Events
Events in 2017
This year, we ran three EA Global conferences, with over 1,600 attendees in total. Our conference in Boston focused on the frontiers of research in EA, our San Francisco event was focused on the EA community, and in London, we experimented with focusing more on existing community members and less on introductory content. The unifying theme of the three conferences was “doing good together.”
Alongside these three conferences, we also ran two smaller external events (one with the Individual Outreach Team and one with leaders of EA organizations) and three internal offsite events for CEA staff. We also supported local communities in running four EAGx events: EAGxMadison, EAGxPhilly, EAGxAustralia and EAGxBerlin. In total, there were approximately 500 attendees.
You can find details of many of the larger external events that we have run on our website, and videos of many of the talks are available on our YouTube channel.
Impact Review
Our EA Global conferences focused on existing members of the EA community and on helping them improve their plans, their commitment, and their understanding of EA. In our surveys, we therefore asked about attendees’ goals, their engagement, what they learned, and whether their plans changed. Unfortunately, in an effort to gather better data, we changed our survey questions between events, which made comparisons harder. Of respondents from our Boston event survey, 92% learned something new, and 14% of respondents said our San Francisco conference would lead them to make significant plan changes. Of survey respondents from London, 28% expect to make major plan changes, including changing direction within a field (25%) or completely changing cause areas (3.1%).
Beyond the benefit to individuals, there are also community-wide benefits to increasing cooperation. The longer-term effects are hard to pinpoint, but we were pleased with how well this year’s theme of doing good together seemed to go at the conferences.
We are currently conducting a more in-depth analysis of our EA Global London data, including conducting interviews with some attendees who either had large plan changes or no plan changes from attending.
As mentioned in the Individual Outreach Team section above, our smaller events seem to have helped individuals make significant plan changes and helped organizations coordinate.
Plans for 2018
We plan to experiment with further small events this year. These events will support the Individual Outreach Team’s work and give us faster feedback loops than our larger events for learning which formats work best. They include a small AI strategy retreat in January and a retreat for local group leaders.
Given the higher numbers of applicants and attendees for this year’s San Francisco and London EA Global events than for the Boston event, we are considering having two 2018 EA Global conferences: one in London and one in the San Francisco Bay Area in the US.
Giving What We Can
Giving What We Can Activities in 2017
Giving What We Can is a community of people who have pledged to donate 10% of their income over the course of their careers to the most impactful organizations they can find. This year was our first full year of running Giving What We Can as a project within CEA rather than as a separate organization. This transition saw the consolidation of the Giving What We Can Trust into CEA and a new president for the project: in June, Julia Wise (a member of the Community Team at CEA) took on the role.
This year, we particularly focused on emphasizing the seriousness of the commitment involved in taking the Pledge, and on encouraging more people to use Try Giving as a way to make a short-term commitment to giving before taking the lifetime pledge. We have also been working to improve the effectiveness of donations through the creation of EA Funds, a platform that makes donating to effective causes easier. EA Funds is now the home of the Giving What We Can pledge form and will soon house My Giving, our donation-tracking platform. All of this will make it easier for members to donate and to record whether they have fulfilled their pledge. Many members are already donating through EA Funds, and their donation history will automatically be added to their My Giving record. We expect the new combined system to provide more complete and reliable information about the community’s pattern of donations and pledge follow-through than we have had in the past.
We also focused on improving the community’s understanding of the pledge, with a forum post clarifying common misconceptions and a talk on the pledge at EA Global Boston.
We celebrated reaching 2,500 members with a reception in San Francisco and 3,000 members with a reception in London.
Impact Review
At the time of writing, 848 new members have joined in 2017 (a 35% increase on the 2,430 members at the beginning of the year). During the same period last year, we had 850 new members, a 58% increase on the initial 1,460 that year. This slow-down in the rate of growth reflects our change from emphasizing recruitment of new members to emphasizing the Pledge as a serious lifetime commitment.
Plans for 2018
Our main priorities for 2018 include getting the new platform behind Giving What We Can and EA Funds running so that we can gather better data on member donations. We will also update the Giving What We Can website to more accurately reflect the range of cause areas our members care about and to give more current information about past and projected giving by members.
Local Groups
Local Groups Activity in 2017
This year has seen a shift in focus in the local group support provided by CEA. Particularly this academic year, we have dedicated more time to in-depth support for the most established groups rather than basic, blanket support for all groups. In part, we were able to do this because of the local group support offered by Rethink Charity and EAF.
We realize that the EA community has grown a lot, but historically, we have put more energy into that growth than into supporting the people we already have in the community to deepen their engagement. Our focus on a smaller number of groups is in part to rectify that.
It seems that the most engaged EA Groups may provide many times more value than an average group, so we have sharpened our focus on the most engaged groups for that reason.
We have provided internships at the CEA Oxford office for group leaders, with a particular focus on one-to-one support as they develop projects for their groups. With support from volunteers, we provide ongoing video call support for 35 groups as part of a mentoring program run in conjunction with EA Cambridge. We have referred 25 people to 80,000 Hours coaching this academic year and given $40,000 in funding to EA Groups. If you run a local group, we strongly encourage you to apply for funding via the EA Groups page.
Impact review
We have not yet conducted a systematic review of EA Groups support for 2017. This is partly because our priorities changed during 2017, which meant changing how we measure success.
At the start of 2017, GWWC pledges were the core metric for EA groups, but as mentioned above, we changed our approach, emphasizing the Pledge as a serious lifetime commitment. It therefore no longer felt appropriate for this to be the main metric for local groups.
We spent the summer planning new projects, and this academic year EA Groups support is using 80,000 Hours coaching referrals as its main metric. Referral numbers were strong, but further activities to generate coaching referrals were postponed because 80,000 Hours had less coaching capacity during their review period. Of our 25 referrals, 15 people have received or will receive coaching. Of those, one is now spending a month working at FHI, one is doing contract work for CEA, and another will be attending a follow-up event. It is too early to tell the long-term implications of this, but we have helped some local group members gain new experience and improve their plans.
We hope to gain additional information from the Rethink Charity LEAN impact evaluation. Some of their initial findings are summarized here. This report includes information from an EA Groups survey that Rethink Charity, EAF and CEA collaborated on.
Plans for 2018
We plan to continue our focus on the larger EA groups by running a retreat for group leaders and a series of internships. The aim of these will be to bring group leaders up to speed on our thinking and to give groups time to trial project ideas with dedicated support from us. We are currently looking into providing funding to support some groups to professionalize with full-time, paid local group organizers.
We are close to publishing an EA community building guide with our most up-to-date thinking on how local groups can best help their members have an impact.
Community Health
Community Health Activities in 2017
Helping the EA community thrive is a key part of the Community Team’s work. We try to improve online discourse, provide resources for handling common issues in local groups, and reduce risks to the EA community. This includes a number of activities such as:
Having Julia Wise serve as a point person to collate information from around the community about problems that arise, such as people acting badly toward others in the community. A point person who addresses such problems reduces the risk of several community members independently experiencing a problem but not thinking that their individual experience is worth acting on, or not being in a good position to act on it.
Encouraging more active moderation of EA Facebook groups by their respective moderators to reduce divisive “flamewar” style discussions and to steer toward civil, productive discourse.
Managing the EA Forum, which CEA took over responsibility for this year.
Providing resources to local EA groups, such as a training on handling protests at speaker events and a guide on hosting journalists at local events.
We have also been working on proactive approaches to community health, such as creating the EA Guiding Principles, to which many EA organizations have added their support, in order to help the effective altruism community stay true to its best elements. We’ve tried to shape community norms through content at EA Global about self-care, diversity, and making local groups more welcoming. In December, we began working with the Research Team on a review of potential risks to the EA community and ways to mitigate them.
Impact review
Success in this work generally looks like members of the EA community not noticing problems that have been averted, so the impact is hard to see. However, we think that, given the value of the EA community, reducing risks to the community is important. We have not previously conducted a systematic review, so our recent work with the Research Team involves identifying areas we can track to understand our progress in the future.
Plans for 2018
Our plans for 2018 are to continue responding to problems that arise in the community, while exploring ways we can be more proactive in preventing these problems. This particularly includes our work with the Research Team to identify the biggest risks to the EA Community and possible steps to reduce such risks.
If you would like to discuss any of the Community Team’s work, please contact Larissa Hesketh-Rowe (larissa@centreforeffectivealtruism.org), as feedback is always welcome.
Operations Team
The Operations Team at CEA supports the effectiveness of other teams.
Like the other teams at CEA, the Operations Team is relatively new as a distinct group with its own manager, metrics, and team members; previously, our operations were managed part-time by CEA staff with other roles. All four members of the current Operations Team joined CEA in 2017. Having a full team has allowed other staff to concentrate full-time on CEA’s other projects (such as event management) rather than additionally having to do operations work.
The Operations Team manages all of CEA’s financial and legal needs and lends invaluable support to some of CEA’s largest projects, such as logistics at EA Global conferences, setting the financial and legal framework for EA Grants and managing the hiring and retention of CEA’s staff.
Operations Team Activities in 2017
This year we:
Hired and onboarded 9 new staff.
Set up an entirely new office in Berkeley.
Made renovations to the Oxford office to help our staff be more productive, including standing desks, noise-cancelling partitions, faster Wi-Fi, and daylight lamps.
Provided logistical support for EA Global conferences, Leaders Forum, and team retreats.
Dealt with budgets, contracts, grants, and payments necessary for the ongoing function of CEA. This included managing approximately ten times the volume of donations as last year.
Enabled the functioning of both EA Funds and EA Grants by investigating legal risks and requirements, tracking donation information, working with lawyers to write up contracts, corresponding with donors, and paying out grants to recipients.
Successfully completed an audit for CEA UK.
Acquired H-1B cap exemption, making it easier for CEA to hire people in the future.
Acquired visas for many staff so they were able to move to the US.
Impact assessment
Our operations work is vital for the functioning of CEA, and this year has allowed us to scale up recruitment, finances and office space. There are still some areas where we can improve, however, including:
Increasing the timeliness with which we deal with donor and other enquiries.
Updating our accounting so that it is transparent outside of CEA and faster to audit.
Plans for 2018
Our main priority in 2018 is to build capacity so that we can continue to scale both in terms of staff and in donations. This means building more robust financial, legal and HR processes that suit the organization in its current, larger form (e.g., better communication protocols with accountants, better expensing procedures for employees, and better-documented processes for operational procedures.)
Overall, 2017 was largely a year of creation for the Operations Team itself and 2018 will be focused on improving what has been built to meet the needs of the organization’s size and scope.
Tech Team
The Tech Team provides online infrastructure and technical advice to other teams, maintaining and building software to help the EA community be more effective and to increase CEA’s operational efficiency. EA Funds is now also managed by the Tech Team.
The team scaled up significantly this year, from one employee to four, adding two developers and a product manager. This should allow us to significantly reduce development times and provide more capacity to work on more complex projects.
Tech Team Activities in 2017
Our participation in Y Combinator allowed us to build useful networks and build skills. During Y Combinator, we developed the idea of EA Funds. Our work culminated in the project’s release in March 2017.
EA Funds has received over $2m in donations to its philanthropic funds to date, and has regranted around $1.1m of this amount. In addition to the domain-expert-managed philanthropic funds, EA Funds has also been serving as a centralized donation gateway for donors giving to EA-aligned organizations. Donors can easily set a preferred allocation to any combination of the Funds and to any of the non-profit organizations currently supported on the platform (this includes all GiveWell top charities, most standout charities, and several EA “meta” organizations). So far, a further approximately $600,000 has been donated through the platform in this manner.
EA Funds is part of EffectiveAltruism.org, which we are building out as our flagship online platform. The platform has expanded to include the Giving What We Can Pledge, and tools for easier discovery and management of local groups. These products are still in early stages of user testing, but they have proven that the architecture of the web app can be used for a broad range of purposes. This will eventually provide members of the EA community with a single login for accessing a range of core online services (Funds donations, Giving What We Can or Try Giving pledge, and My Giving record, local group memberships, and eventually EA Forum and EA Global ticketing), all in one place.
One of our key achievements this year was to drastically increase our capacity for next year. The key bottleneck prior to the hiring round was having all aspects of CEA’s online infrastructure managed by a single individual. This hiring round allowed us to professionalize, and the greater division of responsibilities means that individual products receive much more dedicated attention.
Impact Review
Our most obvious success this year is EA Funds, which grew from an idea in January to a widely used platform by the end of the year. We processed 10,000 donations from almost 2,500 individual donors, totaling ~$2.6m. We see EA Funds as a key piece of community infrastructure, as well as a well-tested springboard to launch new projects that provide value to the community.
While we consider EA Funds to be a successful project, there remains considerable room for improvement. We should have prioritized building systems that provide more regular insight into the amount of money in each fund. This would have benefited both the fund managers (who do not currently have an easy way to check how much is in their respective funds at any given time) and would increase the community’s trust in EA Funds by providing greater transparency. In particular, we did not publish grant payout reports on the website as quickly as we should have. We are updating our processes to address these issues, and we have prioritized creating a unified dashboard where donors can find more information about the current balance of each fund, with work expected to commence in Q1 2018.
Prioritization has been difficult. There is a tension between:
Business as usual (e.g., GWWC Pledge upgrade, maintaining EA Funds)
Capacity building (e.g., projects for the Operations team)
Improved community infrastructure (e.g., new EA Forum, event ticketing)
It seems important to get day-to-day work done, even though it often seems lower priority than improving infrastructure. We would like to spend more time in 2018 clarifying how to make these trade-offs so that we focus on the development work that adds the most value.
Plans for 2018
The Tech Team will continue to support all other CEA teams in achieving their goals, listen closely to community and user feedback, and develop projects that help EAs become more effective.
We are broadly following the ‘agile development’ model, which involves (1) seeking regular input from other teams on what to prioritize and (2) building tight feedback loops so that we can test hypotheses and make course corrections. The below are our best guesses at priority projects for 2018 but are subject to change and reprioritization as we get input from colleagues and users.
EA Funds
A reporting solution for the Fund Managers and EA Funds users, improving the transparency of each fund’s takings and payouts
Donor lottery functionality (beta to be released mid-December, further tweaks expected in subsequent runs of the lottery)
Potential expansion of the range of Funds on offer and investigation of different models for running and using funds
Automation of payroll giving
Inclusion of PayPal as a supported payment gateway
Effectivealtruism.org web app
Bringing the current content on EffectiveAltruism.org into the web app, thus consolidating almost all our products under one login
Various experience and design improvements across the web app
Internal efficiency and monitoring software
An improved admin portal for EffectiveAltruism.org for use by the operations team to streamline the day-to-day administration of donations, regrants and customer support
A dashboard for the operations team focused on their key metrics
Community team Customer Relationship Management (CRM) solution, either built in-house or an off-the-shelf solution integrated with our systems. The goal is to have a solution that allows us to present relevant data from all sources in one place.
Focus on improving our internal analytics processes
Official roll-out of the EA Groups platform
We have user testing planned for early January and intend to use feedback received to finalize the platform. We expect a wide release later that month.
Finalizing the migration of Giving What We Can site functionality to effectivealtruism.org
The Pledge has already been migrated to effectivealtruism.org.
Migration of the My Giving dashboard. This will automatically import donations made through EA Funds. It will allow users to report and monitor their incomes, donations, and Pledge adherence. This is due late 2017/early 2018.
Migration of existing My Giving users to the new system
EA Forum
Finalize the handover of the current EA Forum’s codebase from Trike Apps (current maintainers)
Investigate options for building a new community discussion platform and execute on the plan that comes out of this process
Develop in-house event management software
Central ticketing system for EA Global/EAGx events, improving user experience, reducing accounting time, and reducing reliance on often-inadequate third-party event management software.
CEA’s Mistakes
We have of course made mistakes. While some of these are covered in our sections reviewing impact, we felt it was important to clearly note our shortcomings here, too. A few of the more significant areas for improvement that we have identified, which cut across multiple projects, are as follows:
We aspire to high standards of transparency, so the community and our donors know what we’re doing and so other actors in effective altruism are able to make informed decisions about how to interact with CEA and the services we provide. However, we have not always lived up to this standard. In some cases, we prioritized moving on to new projects before sufficiently communicating our current thinking regarding existing ones. This is one of the motivations for writing this post and for encouraging interested community members to sign up to our supporters mailing list, but we still need to do more in this area.
We also could have put more emphasis on ensuring that when staff move between projects, they clearly hand over their reporting requirements.
In some cases, we also had trouble communicating strategy or plans for restructuring internally. This caused stress and reduced productivity for some staff members.
In a few cases, we were poor at communicating hiring decisions to applicants.
Below is a non-exhaustive list of shortcomings, organized by project team:
Research Team
Towards the beginning of the year, we failed to have a sufficiently focused research agenda. This was part of the motivation for the integration of the research team with the rest of CEA mid-year.
In the second half of the year, we failed to make enough progress on producing original content, partly because we were splitting our time between this, strategy work, supporting EA Grants, and collating content for EffectiveAltruism.org. We plan to be more focused on original content in the coming year.
In some cases, we should have spent more time planning projects for summer research fellows, and we should have encouraged summer fellows to share their ideas and collaborate with each other more than we did this year. We will be carefully assessing whether and how to run any research fellowships in the future.
Individual Outreach Team
EA Grants
Note that we have included this section under the Individual Outreach Team because they now run EA Grants, but this was not the case for most of this year.
Our communication around EA Grants was confusing. We initially announced the process with little advertisement. Then, we advertised it on the EA Newsletter, but only shortly before the application deadline, and extended the deadline by two days.
We underestimated the number of applications we would receive, which gave us less time per candidate in the initial evaluation than we would have liked. It also caused delays, which we did not adequately communicate to applicants. We should have been less ambitious in setting our initial deadlines for replying, and we should have communicated all changes in our timetable immediately and in writing to all applicants.
Our advertisement did not make sufficiently clear that we might not be able to fund educational expenses through CEA. Fortunately, the Open Philanthropy Project was receptive to considering some of the academic applicants.
Tech
A delay in implementing some of the recurring payment processing logic in EA Funds meant that users who created recurring payments before May did not have their subscriptions processed. The issue has since been fixed and recurring payments have been working normally since mid-May. We did not charge make-up payments, which meant several thousand dollars worth of payments that should have been processed were not. We informed donors as soon as we were aware of the issue. However, a more robust prioritization process could probably have avoided the issue.
A bug in our message queue system meant that some payment instructions were processed twice. Due to poor timing (an audit, followed by a team retreat), the bug was not discovered for several days, leading to around 20 donors being charged for their donations twice. As soon as the fault was discovered, we notified donors and refunded their payments. We now periodically inspect the message queue to ensure messages are being delivered correctly, and we have implemented an additional layer of deduplication logic.
A failure to perform server maintenance on Giving What We Can’s server caused it to intermittently stop responding to network connections. This caused many people frustration as they tried to log in or take the Pledge. Due to ongoing issues with this system, and prioritizing other projects, we did not identify the cause of the fault as fast as we should have. This functionality has since been migrated to EffectiveAltruism.org, and the server will be decommissioned soon.
We failed to keep the EA Funds website up to date, meaning that many users were unsure how their money was being used. With the arrival of Marek Duda as product manager, we are now addressing this. We are planning to publish a further post, by the end of 2017, with a deeper dive on the EA Funds platform.
Our hiring process took longer than we anticipated because we had to develop a process for technical hiring rounds. We think that we have now learned how to run such a round in the future.
Community Team
Events
Our advertising around EA Global events, especially the London conference, was confusing. We made the decision to shift towards more advanced content aimed at existing members of the community midway through the five-month application period. This led to confusion about the intended audience for the conference, and newer members of the EA community who were previously encouraged to attend our events felt shut out. This is not a good way of welcoming people to our community. In the future, we will make sure that our advertising, admissions, and content for events are more consistent and communicated further in advance.
We did not provide enough support to organizers of EAGx conferences. We hope that increased capacity on the team, through hiring a new events specialist and handing operations work to the Operations Team, will help with this.
Local groups
Communication with EA Group organizers should have been more frequent and more reliable. For example, many student group organizers at EAG London reported having an unclear understanding of the goals of EA groups and of CEA’s thinking in general. The EA Community Building Guide is intended to address this, but it is yet to be published; speed of communication should have been prioritized over depth here. There have also been occasions where group leaders have been left waiting because of CEA. For example, the launch of the EA Groups platform has been delayed multiple times while we have been building capacity in our Tech Team.
Our work on local groups was at times insufficiently focused. In some cases, we tried several approaches, but not long enough to properly assess whether they had succeeded or not. For example, we began a beta version of an EA Conversations platform to facilitate conversations between EAs but discontinued work on it despite initial success, largely because of competing time demands. We have been using a quarterly goal setting and review process to try and improve this.
Giving What We Can
We failed to keep some of the content on this website up to date, as some of the figures we use have changed. Similarly, while the Pledge is cause-neutral and no longer focused solely on the developing world, our website doesn’t fully reflect that. We plan to address these issues in 2018.
At times, particularly earlier in the year, we focused too much on promoting the main Pledge, even though this might not be the right option for some people. For this reason, we have shifted more emphasis to Try Giving.
Community health
In responding to existing problems, we have not prioritized preventing or mitigating other problems as much as we could have. We have commissioned the Research Team to produce a report on proactive things we can do to measure and address community health, and we intend to be more proactive on this in 2018.
Operations
As discussed above, occasionally, the team was slow to respond to donor and other enquiries. We have been building our team’s knowledge, capacity and processes to try and improve this. For example, earlier in the year access to donor information to respond to enquiries was restricted to a few staff members, slowing our response times.
We still need to improve our accounting, so that it is more transparent to people outside CEA, and to reduce the time that audits take up.
Funding
Our current funding situation is secure, with approximately two years of runway. However, we are planning to scale up some activities in 2018, including expanded granting through EA Grants and via local groups, as well as a larger program of events. Given our growth, it is also likely we will need to make new hires during the year. While much of this expenditure will be covered by larger donors, we are also fundraising to make sure we have diversity in our sources of income (especially since some of our funding agreements are contingent on us having multiple backers). If you would like to support CEA, please donate using this link.
Conclusion
We hope that this post has given you some insight into our work this year and our plans for 2018. As our mission is to support the EA community in doing the most good we can, we want to keep you in the loop and hear your feedback.
If you would like to receive our monthly supporters emails for more regular updates on our work, please sign up here. If you would like to discuss any of our plans in depth, please contact the team lead listed at the beginning of this post or comment here.
Thanks for the update, much appreciated.
I only have a question in one area: could you say a bit more about how the individual outreach team will find people and how it might try to help them? Maybe I’m misreading this, but there’s something worryingly mysterious and opaque about there being someone in CEA who reaches out to ‘pick winners’ (in comparison to, say, having a transparent, formal application process for grants which seems unobjectionable).
One worry (which I’m perhaps overstating) is this might lead to accidental social/intellectual conformism because people start to watch what they do/say in the hope of the word getting out and them getting ‘picked’ for special prizes.
Good question. I agree that the process for Individual outreach is mysterious and opaque. My feeling is that this is because the approach is quite new, and we don’t yet know how we’ll select people or how we’ll deliver value (although we have some hypotheses).
That said, there are two answers to this question depending on the timeline we’re talking about.
In the short run, the primary objective is to learn more about what we can do to be helpful. My general heuristic is that we should focus on the people/activity combinations that seem to us to be likely to produce large effects so that we can get some useful results, and then iterate. (I can say more about why I think this is the right approach, if useful).
In practice, this means that in the short-run we’ll work with people that we have more information on and easier access to. This probably means working with people that we meet at events like EA Global, people in our extended professional networks, EA Grants recipients, etc.
In the future, I’d want something much more systematic to avoid the concerns you’ve raised and to avoid us being too biased in favor of our preexisting social networks. You might imagine something like 80K coaching where we identify some specific areas where we think we can be helpful and then do broader outreach to people that might fall into those areas. In any case, we’ll need to experiment and iterate more before we can design a more systematic process.
I would also be worried. Homophily is one of the best predictors of links in social networks, and factors like being a member of the same social group, having similar education, opinions, etc. are known to bias selection processes toward selecting similar people. This risks making the core of the movement more self-encapsulated than it already is, which is a shift in the wrong direction.
I would also be worried that, with 80k hours shifting more toward individual coaching, there is now a bit of an overemphasis on the “individual” approach and too little on “creating systems”.
Also, it seems a lot of this would benefit from knowledge from fields like the “science of success”, scientometrics, network science, etc. E.g., when I read concepts like “the next Peter Singer”, or a lot of thinking along the lines of “most of the value is created by just a few people”, I’m worried. While such thinking is intuitively appealing, it can be quite superficial. E.g., a toy model: imagine a landscape with gold scattered in power-law-sized deposits, and prospectors walking randomly, randomly discovering deposits of gold. What you observe is that the value of gold collected by prospectors is also power-law distributed. But obviously, attempts to emulate “the best” or find the “next best” would be futile. It seems an open question (worth studying) how much any specific knowledge landscape resembles this model, or how big a part of success is attributable to luck.
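This toy model is easy to check with a quick simulation. In the sketch below (all parameters hypothetical), every prospector has identical skill and makes the same number of random digs, yet the distribution of hauls is still heavily skewed:

```python
import random

random.seed(0)

def simulate_prospectors(n_prospectors=1000, digs=20, alpha=1.5):
    """Each prospector makes `digs` random finds; deposit sizes are
    Pareto (power-law) distributed, so skill plays no role at all."""
    totals = []
    for _ in range(n_prospectors):
        totals.append(sum(random.paretovariate(alpha) for _ in range(digs)))
    return sorted(totals, reverse=True)

totals = simulate_prospectors()
# Share of all gold held by the luckiest 10% of (identical) prospectors:
top_decile_share = sum(totals[:100]) / sum(totals)
print(f"top 10% of prospectors hold {top_decile_share:.0%} of the gold")
```

Despite everyone being interchangeable, the top decile ends up with far more than 10% of the total, which is the observation the toy model warns against over-interpreting.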
That’s a nice toy model, thanks for being so clear :-)
But it’s definitely wrong. If you look at Bostrom on AI, or Einstein on relativity, or Feynman on quantum mechanics, you don’t see people who are roughly as competent as their peers, just being lucky in which part of the research space was divvied up and given to them. You tend to see people with rare and useful thinking processes having multiple important insights about their field in succession, getting many things right that their peers didn’t, not just one, as your model would predict (if being right was random luck). Bostrom looked into half a dozen sci-fi-looking areas that others ignored to figure out which were important, before concluding with x-risk and AI, and he asked questions that were on nobody’s radar. Feynman made breakthroughs in many different subfields, and his success looked like being very good at fundamentals, like being concrete and noticing his confusion. I know less about Einstein, but as I understand it, getting to relativity required a long chain of reasoning that was unclear to his contemporaries. “How would I design the universe if I were god” was probably not a standard tool handed out to many physicists to try.
You may respond “sure, these people came up with lots of good ideas that their contemporaries wouldn’t have, but this was probably due to them using the right heuristics, which you can think of as having been handed out randomly in grad school to all the different researchers, so it still is random just on the level of cognitive processes”.
To this I’d say that, you’re right, looking at people’s general cognitive processes is really important, but I think I can do much better than random chance in predicting what cognitive processes will produce valuable insights. I’ll point to Superforecasters and Rationality: AI to Zombies as books with many insights into which cognitive processes are more likely to find novel and important truths than others.
In sum: I think the people who’ve had the most positive impact in history are power law distributed because of their rare and valuable cognitive processes, not just random luck, and that these can be learned from and that can guide my search for people who (in future) will have massive impact.
Obviously the toy model is wrong in describing reality: it’s one end of the possible spectrum, where you have complete randomness. On the other you have another toy model: results in a field neatly ordered by cognitive difficulty, and the best person at a time picks all the available fruit. My actual claims roughly are
reality is somewhere in between
it is field-dependent
even in fields more toward the random end, there actually would be differences like different speeds of travel among prospectors
It is quite unclear to me where on this scale the relevant fields are.
I believe your conclusion, that the power-law distribution is all due to the properties of people’s cognitive processes and not to the randomness of the field, is not supported by the scientometric data for many research fields.
Thanks for a good preemptive answer :) Yes, if you are good enough at identifying the “golden” cognitive processes. While it is clear you would do better than random chance, it is very unclear to me how good you would be. *
I think it’s worth digging into an example in detail: if you look at the early Einstein, you actually see someone with unusually developed geometric thinking and the very lucky heuristic of interpreting what the equations say as the actual reality. Famously, the special relativity transformations were first written down by Poincaré. “All” that needed to be done was to take them seriously. General relativity is a different story, but at that point Einstein was already famous and possibly one of the few brave enough to attack the problem.
Continuing with the same example, I would be extremely doubtful that Einstein would have been picked by a selection process similar to what CEA or 80k hours will probably be running, before he became famous. 2nd-grade patent clerk? Unimpressive. Well connected? No. Unusual geometric imagination? I’m not aware of any LessWrong sequence which would lead to picking this as that important :) Lucky heuristic? Pure gold, in hindsight.
(*) In the end, you can treat this as an optimization problem, depending on how good your superior-cognitive-process selection ability is. A practical example: you have 1,000 applicants. If your selection ability is good enough, you should take 20 for individual support. But maybe it is merely good, and then you may get better expected utility if you can reach 100 potentially great people in workshops. Maybe you are much better than chance, but not really good… then maybe you should create an online course taking in 400 participants.
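This trade-off can be sketched in a toy simulation (all numbers hypothetical): applicants have heavy-tailed potential, we rank them by a noisy assessment, and a fixed support budget is split among the top k, so broader programs give each person a shallower boost:

```python
import random

random.seed(2)

def expected_value(n_applicants=1000, k=20, noise=1.0, budget=100.0):
    """Toy model: applicants have Pareto-distributed true potential;
    we rank them by a noisy (log-normally perturbed) assessment,
    support the top k, and split a fixed budget equally among them."""
    potential = [random.paretovariate(1.8) for _ in range(n_applicants)]
    assessed = [(p * random.lognormvariate(0, noise), p) for p in potential]
    selected = sorted(assessed, reverse=True)[:k]
    per_person_boost = budget / k  # intensive for few, shallow for many
    return sum(p * per_person_boost for _, p in selected)

for k in (20, 100, 400):
    print(k, round(expected_value(k=k), 1))
```

With perfect assessment (noise=0), concentrating the budget on a small k wins; as assessment noise grows, selection approaches random and broader, shallower programs close the gap, which is exactly the dependence on selection ability described above.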
I share your caution on the difficulty of ‘picking high impact people well’, besides the risk of over-fitting on anecdata we happen to latch on to, the past may simply prove underpowered for forward prediction: I’m not sure any system could reliably ‘pick up’ Einstein or Ramanujan, and I wonder how much ‘thinking tools’ etc. are just epiphenomena of IQ.
That said, fairly boring metrics are fairly predictive. People who do exceptionally well at school tend to do well at university, those who excel at university have a better chance of exceptional professional success, and so on and so forth. SPARC (a program aimed at extraordinarily mathematically able youth) seems a neat example. I accept none of these supply an easy model for ‘talent scouting’ intra-EA, but they suggest one can do much better than chance.
Optimal selectivity also depends on the size of boost you give to people, even if they are imperfectly selected. It’s plausible this relationship could be convex over the ‘one-to-one mentoring to webpage’ range, and so you might have to gamble on something intensive even in expectation of you failing to identify most or nearly all of the potentially great people.
(Aside: Although tricky to put human ability on a cardinal scale, normal-distribution properties for things like working memory suggest cognitive ability (however cashed out) isn’t power law distributed. One explanation of how this could drive power-law distributions in some fields would be a Matthew effect: being marginally better than competing scientists lets one take the majority of the great new discoveries. This may suggest more neglected areas, or those where the crucial consideration is whether/when something is discovered, rather than who discovers it (compare a malaria vaccine to an AGI), are those where the premium to really exceptional talent is less. )
Examples are totally worth digging into! Yeah, I actually find myself surprised and slightly confused by the situation with Einstein, and do make the active prediction that he had some strong connections in physics (e.g. at some point had a really great physics teacher who’d done some research). In general I think Ramanujan-like stories of geniuses appearing from nowhere are not the typical example of great thinkers / people who significantly change the world. If I’m right, I should be able to tell such stories about the others, and in general I do think that great people tend to get networked together, and that the thinking patterns of the greatest people are noticed by other good people before they do their seminal work; cf. Bell Labs (Shannon/Feynman/Turing etc.), the PayPal Mafia (Thiel/Musk/Hoffman/Nosek etc.), SL4 (Hanson/Bostrom/Yudkowsky/Legg etc.), and maybe the Republic of Letters during the Enlightenment? But I do want to spend more time digging into some of those.
To approach from the other end, what heuristics might I use to find people who in the future will create massive amounts of value that others miss? One example heuristic that Y Combinator uses to determine who in advance is likely to find novel, deep mines of value that others have missed is whether the individuals regularly build things to fix problems in their life (e.g. Zuckerberg built lots of simple online tools to help his fellow students study while at college).
Some heuristics I use to tell whether I think people are good at figuring out what’s true, and make plans for it, include:
Does the person, in conversation, regularly take long silent pauses to organise their thoughts, find good analogies, analyse your argument, etc.? Many people I talk to treat silence as a significant social cost, due to awkwardness, and do not make the trade-off toward figuring out what’s true. I trust people more when they make these small trade-offs toward truth over social comfort.
Does the person have a history of executing long-term plans that weren’t incentivised by their local environment? Did they decide a personal-project (not, like, getting a degree) was worth putting 2 years into, and then put 2 years into it?
When I ask about a non-standard belief they have, can they give me a straightforward model with a few variables and simple relations that they use to understand the topic we’re discussing? In general, how transparent are their models to themselves, and are the models generally simple and backed by lots of little pieces of concrete evidence?
Are they good at finding genuine insights in the thinking of people who they believe are totally wrong?
My general thought is that there isn’t actually a lot of optimisation process put into this, especially in areas that don’t have institutions built around them exactly. For example academia will probably notice you if you’re very skilled in one discipline and compete directly in it, but it’s very hard to be noticed if you’re interdisciplinary (e.g. Robin Hanson’s book sitting between neuroscience and economics) or if you’re not competing along even just one or two of the dimensions it optimises for (e.g. MIRI researchers don’t optimise for publishing basically at all, so when they make big breakthroughs in decision theory and logical induction it doesn’t get them much notice from standard academia). So even our best institutions at noticing great thinkers with genuine and valuable insights seem to fail at some of the examples that seem most important. I think there is lots of low hanging fruit I can pick up in terms of figuring out who thinks well and will be able to find and mine deep sources of value.
Edit: Removed Bostrom as an example at the end, because I can’t figure out whether his success in academia, while nonetheless going through something of a non-standard path, is evidence for or against academia’s ability to figure out whose cognitive processes are best at figuring out what’s surprising+true+useful. I have the sense that he had to push against the standard incentive gradients a lot, but I might just be wrong and Bostrom is one of academia’s success stories this generation. He doesn’t look like he just rose to the top of a well-defined field, though; it looks like he kept having to pick which topics were important and then find some route to publishing on them, as opposed to the other way round.
For scientific publishing, I looked into the latest available paper[1], and apparently the data are best fitted by a model where the impact of a scientific paper is predicted by Q·p, where p is the “intrinsic value” of the project and Q is a parameter capturing the cognitive ability of the researcher. Notably, Q is independent of the total number of papers written by the scientist, and Q and p are also independent. Translating into the language of digging for gold: the prospectors differ in their speed and in their ability to extract gold from the deposits (Q), while the gold in the deposits actually is randomly distributed. To extract exceptional value, you have to both have high Q and be very lucky. What is encouraging for selecting talent is that Q seems relatively stable over a career and can be usefully estimated after ~20 publications. I would guess you can predict even with less data, but the correct “formula” would be trying to disentangle the interestingness of the problems the person is working on from the interestingness of the results.
(As a side note, I was wrong in guessing this is strongly field-dependent, as the model seems stable across several disciplines, time periods, and many other parameters.)
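To make the Q·p model concrete, here is a minimal simulation sketch. This is not the paper’s actual fitting procedure: the lognormal distribution for p and the median-based estimator are my own assumptions for illustration. It shows how a scientist’s Q can be crudely recovered from ~20 papers even though each paper’s impact is mostly luck:

```python
import random
import statistics

random.seed(0)

def paper_impacts(Q, n_papers):
    # Each paper's impact = Q * p, where p (the project's "intrinsic
    # value", i.e. luck) is drawn at random, independently of the
    # scientist and independently of Q.
    return [Q * random.lognormvariate(0, 1) for _ in range(n_papers)]

def estimate_Q(impacts):
    # Under the (assumed) lognormal, the median of p is 1, so the
    # median impact is a crude estimator of the scientist's Q.
    return statistics.median(impacts)

true_Q = 5.0
estimate = estimate_Q(paper_impacts(true_Q, 20))
# With only 20 papers the estimate is noisy but in the right ballpark,
# which is the sense in which Q is "usefully estimable" early on.
```

The key property the sketch illustrates is that the estimator depends on the typical impact per paper, not on the total number of papers, matching the claim that Q is independent of productivity.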
Interesting heuristics about people :)
I agree the problem is somewhat different in areas that are not that established/institutionalised, where you don’t have clear dimensions of competition, or where the well-measurable dimensions are not well aligned with what is important. Looks like another understudied area.
[1] Quantifying the evolution of individual scientific impact, Sinatra et al., Science, http://www.sciencesuccess.org/uploads/1/5/5/4/15543620/science_quantifying_aaf5239_sinatra.pdf
I copied this exchange to my blog, and there were a number of additional interesting comments there.
I have a very similar concern to Michael’s. In particular, it looked to me like the participants picked for this were people with whom CEA had an existing relationship, for example picking from CEA’s donor base. This means that participants were those who had a very high opportunity cost in moving to direct work (as they were big donors). I expect that this is a suboptimal way of getting people to move into direct work.
Look forward to seeing:
This also worried me because I’m under the impression that, when selecting based on intuition, assessors look for applicants who remind them of themselves (“He’s just like me when I was his age!”). If the individual outreach team are similar to the rest of the community, relying on their intuitions could make our diversity problems worse.
That is an excellent update. The strategic directions broadly make sense to me for all of the teams, and I, like many people, am really happy with the ways CEA has improved over the last year.
One item of feedback on the post: the description of mistakes is a bit long, boring, and over-the-top. Many of these things are not actually very important issues.
One suggestion re the EA Forum revamp: the lesserwrong.com site is looking pretty great these days. My main gripes (things like the font being slightly small for my preferences) could easily be fixed with some restyling. Some of their features, like including sequences of archived material, could also be ideal for the EA Forum use case. IDK whether the codebase is good, but recall that the EA Forum was originally created by restyling LessWrong 1.0, so the notion of stealing that code comes from a healthy tradition! This last part is probably a bit too crazy (and too much work), but one can imagine a case where you post content (and accept comments) on both sites at once.
That aside, I really appreciate that you guys have taken over the forum this year. And in general, it’s great to see all of this progress, so here’s to 2018!
Yeah, we have talked to the LW 2.0 team a bit about the possibility of using their codebase as a starting point or possibly doing some kind of actual integration, but we’re still in the speculative phase at this point :)
I agree both with Ryan’s overall evaluation (this is excellent) and that the ‘mistakes’ section, although laudable in intent, errs slightly too far in the ‘self-flagellatory’ direction. Some of the mistakes listed either seem appropriate decisions (e.g. “We prioritized X over Y, so we didn’t do as much Y as we’d like”), or are the result of reasonable decisions or calculations ex ante which didn’t work out.
I think the main value of publicly recording mistakes is to allow others to learn from them or (if egregious) be the context for a public mea culpa. The line between, “We made our best guess, it turned out wrong, but we’re confident we made the right call ex ante” and “Actually, on reflection, we should have acted differently given what we knew at the time” is blurry, as not all decisions can (or should) be taken with laborious care.
Perhaps crudely categorising mistakes into ‘major’ and ‘minor’ given their magnitude, how plausibly they could have been averted, etc., and putting the former in updates like these but linking to the latter in an appendix, might be a good way forward.
Would love to see LW2.0 become the new codebase, but it is still undergoing rapid changes at the moment and isn’t completely stable.
Sure, although the tech team could presumably just wait six months while they work on other stuff.
This is fantastic. Thank you for writing it up. Whilst reading, I jotted down a number of thoughts, comments, questions and concerns.
.
ON EA GRANTS
I am very excited about this and very glad that CEA is doing more of this. How to best move funding to the projects that need it most within the EA community is a really important question that we have yet to solve. I saw a lot of people with some amazing ideas looking to apply for these grants.
1
I think it is quite plausible that £2m is too low for the year. Not having enough funding increases the costs to applicants (time spent applying) and to you (time spent assessing) relative to the benefits (funding moved), especially if there are applicants above the bar for funding whom you cannot afford to fund. I had this thought prior to reading that one of your noted mistakes was that you “underestimated the number of applications”; it feels like you might still be making this mistake.
2
Interesting decision, and it seems reasonable. However, I think it does carry a risk of reducing diversity, and I would be concerned that applicants would be judged on their ability to philosophise in an academic Oxford manner, etc.
Best of luck with it
.
OTHER THOUGHTS
3
Could CEA comment or provide advice to local group leaders on whether they would want local groups to promote the GWWC Pledge or the Try Giving pledge, and on when one might be better than the other? To date, the advice seems to have been to push the Pledge as much as possible and not Try Giving.
4
I do not like the implication that there is a single answer to this question regardless of individuals’ moral frameworks (utilitarian / non-utilitarian / religious / etc.) or skills and backgrounds. Where the mission is to have an impact as “a global community of people...”, the research should focus on supporting those people to do whatever has the biggest impact given their positions.
5 Positives
This is a good thing to have picked up on.
I am glad this is a team
I think it is good to have this written up.
6
It would have been interesting to see estimates of the costs (time/money) as well as the outputs of each team.
.
WELL DONE FOR 2017. GOOD LUCK FOR 2018!
That’s fair. My thinking in choosing £2m was that we would want to fund more projects than we had money to fund last year, but that we would have picked much of the low-hanging fruit, so there’d be less to fund.
In any case, I’m not taking that number too seriously. We should fund all the projects worth funding and raise more money if we need it.
I haven’t thought about this much, but a natural strategy is to try to have a budget sufficiently large that you know you’ll definitely be able to fund all the good projects, and then binary search down to the amount that only funds all the good projects.
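The “binary search down” idea can be sketched in a few lines. This assumes a monotone yes/no signal about whether a given budget covers every project worth funding; the predicate and the £1.7m threshold below are made up purely for illustration:

```python
def min_sufficient_budget(funds_all_good, lo, hi, tol=10_000):
    # Finds (within tol) the smallest budget at which funds_all_good
    # becomes True. Assumes monotonicity: any budget above the true
    # threshold is sufficient, any budget below it is not.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if funds_all_good(mid):
            hi = mid  # mid is enough; the threshold is at or below it
        else:
            lo = mid  # mid is not enough; the threshold is above it
    return hi

# Toy example: suppose the good projects happen to cost £1.7m in total.
needed = 1_700_000
budget = min_sufficient_budget(lambda b: b >= needed, lo=0, hi=5_000_000)
```

In practice, of course, each “probe” of the predicate would take a whole grant round, so this is more a way of framing the strategy than a literal procedure.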
I’d also be interested to find out what happens if CEA announces they’re budgeting, say, £5 million for this, to see whether any good projects appear when that much money is potentially available in the community. Naturally, CEA needn’t give it all away.
(But right now I’d expect most of the best projects are just 1–3 people’s full-time salaries for a small team to work together, so each grant would be <£200k at most.)
Added: On the margin, I’d expect the most useful thing EA Grants could do would be to offer multi-year grants, so people in the community can consider major career changes based on what’s most effective rather than what’s most stable.
Thanks for writing this.
On EA Grants: Will you allow individuals to fund EA Grants in the future? This could mean letting individuals add to CEA’s pot of funding for grants, publishing the rejected grants so that individuals can fund them independently, or putting the applications on EA Funds.
On EA Funds:
What types of funds and models might this investigation include?
We probably won’t raise EA Grants money from more than a handful of donors. I think we can secure funding from CEA’s existing donor base and the overhead of raising money from multiple funders probably isn’t worth the cost.
That said, there are two related things that we will probably do:
We’ll probably refer some promising projects to other funders. We did this last round for projects that we couldn’t fund for legal reasons and for projects where existing funders had more expertise in the project than we did.
We’ll probably refer applicants who were close to getting funding but didn’t to other funders who might be interested in the project.
Just wanted to note that most of our staff are out of the office for the next few days, but will answer when they return!