Ideas for Improving Funding for Individual EAs, EA Projects, and New EA Organizations
Background
Shortly after the first EA Tech Initiatives video call this past weekend, I started thinking about a lot of observations I’ve had over the years about useful EA projects not getting funded or facing funding constraints. Several years ago I thought the EA community was highly funding constrained due to these observations. My updated model is that large organizations have significant funding available (for instance, via large Open Philanthropy Project grants), but projects/grants/startups are highly funding constrained, because there are no centralized efforts to fund them and no promotion of, or culture around, funding smaller initiatives.
In thinking about funding for EA causes, I created a 2x2 funding grid with “organizations” and “individuals” on one axis, and “existing” and “new” on the other axis. It looks like funding “existing organizations” and “new individuals” joining the movement gets a lot of funding, but funding “new organizations” and improving the efficacy of “existing individuals” in EA gets significantly less funding. It could be worth exploring these two neglected funding areas further.
I generated a lot of ideas before I went online and realized that other EAs have already been discussing the need for better project/grant funding in considerable detail, with this appearing as one of the more recent posts in response to this Facebook thread on the main EA Facebook group.
The following are three ideas to tackle EA project funding in order of most decentralized to least decentralized, followed by a proposed structure for scaling small grants to EAs.
Idea 1: Kickstarter for EA Grant Opportunities
This idea aims to fix the discoverability problem by publicly listing all EA grant opportunities that are currently requesting funding so that donors can view, communicate with, and fund grant proposals that would otherwise not be known to most donors. The EA Forum does not appear to be a place for hundreds of people to post funding requests en masse, and I’m not sure what other medium could be used. My understanding is that most small scale funding is currently done on an informal, interpersonal basis which highly limits the ease and availability of funding.
This platform may run into the same problems Kickstarter faces: reduced project discoverability as the number of listed projects grows, quality control issues and poorly executed projects, and problems that arise from the absence of centralized due diligence (anything from outright scams to projects that realistically lacked the ability to execute even after meeting their funding goals many times over). There is likely a good reason why Indiegogo’s equity crowdfunding system uses MicroVentures to screen all offerings before they are posted online, with an acceptance rate of under 5%: otherwise, scams and projects with low execution capacity could flood the platform, making it unusable for good investing unless investors commit substantial time to their own due diligence.
Idea 2: Distributed Grantmakers
I met someone at EA Global 2018 who is heavily involved in the AI safety space. This person mentioned they were familiar with opportunities to make small high impact grants in AI safety, and were considering using their own money to make these grants, but obviously the amount of available capital for self-financed grants is very limited despite the number of and needs of the grantees. This made me realize that as a donor, I would be very poorly equipped to judge project areas like AI safety which interest me but are outside my area of expertise. If I were to make grants in AI safety, I would want someone like the person I met guiding my donation, not only because they are subject matter experts but also because they are very familiar with small grant opportunities that I would never become aware of because I don’t do direct work in the field and am not networked with hundreds of people in the field.
Two problems with EA Grants are apparent in this September 2017 post.
Problem 1: Evaluating any project is challenging without domain expertise in the project’s area(s)
We found it hard to make decisions on first-round applications that looked potentially promising but were outside of our in-house expertise. Many applicants had proposals for studies and charities we felt under-qualified to assess. Most of those applicants we turned down; some we deferred to the relevant Open Phil program manager. We are in the process of establishing relationships with domain experts who can help us do this in the future.
Problem 2: A consolidated project evaluation process takes a lot of time and effort
We then went through the list by rank and chose applicants to interview, discussing applicants about which there was large divergence in scores or general opinion. Given our £500,000 budget and most of three staff members’ time for two weeks, we decided to interview 63 candidates.
Both of these problems could be eliminated by transitioning away from a model where a very small number of people attempt to evaluate a very large number of projects outside their expertise, toward a model that empowers people with existing subject matter expertise and existing knowledge of grant opportunities in their space to fund those opportunities rather than take no action.
This proposal could run into difficulties with the quality of distributed grantmakers, although a grantmaker evaluation process would be at least an order of magnitude less work than evaluating all of the projects themselves. This proposal could cause a bias towards funding projects that grantmakers are already familiar with, which could exclude many projects from consideration. And of course there are issues like conflicts of interest with grantmakers and grantees or even with the grantmaker’s own projects, feeling obligated to donate to one’s contacts for social reasons even if the idea is poor, and facilitating the entire distributed grantmaking system itself.
Idea 3: Large-Scale Centralized Grantmaking
This idea is most similar to EA Grants, but in the form of a separate organization with a robust grantmaking team.
Due to the current scale of EA Grants, people who write grant applications are not being catered to optimally, and this could be a great reason to fund a separate organization exclusively for the purpose of facilitating grants to smaller initiatives. Rather than have CEA team members work part time on giving grants throughout the year and then switch back to their main area of work, dedicated, specialized grantmaking staff should be hired so that disparate grants can be reviewed by someone with an appropriate background, grantmaking can happen throughout the year, a high number of grants can be assessed, and the project can scale quickly and effectively if needed. Hopefully, having dedicated grantmaking staff means better grants over time as the staff becomes more experienced. Edit: CEA is hiring a full-time grants evaluator, so they appear to be moving in this direction.
Right now, EA Grants is not equipped to do things like give funding to for-profit EA initiatives or fund things like educational expenses. Setting up a new organization with the capacity to do tax-deductible grants, non-tax-deductible grants, and equity/bond impact investments could greatly help with getting different types of projects and activities off the ground.
This proposal has concerns around concentrating grantmaking among very few people, the possibility of insufficient transparency when making grants, and issues about the amount of resources it would take to run such an organization compared to less centralized solutions.
Concept Proposal: Fusing All Three Ideas to Maximize Information Sharing, Grantmaking Efficiency, and Ease of Funding
I think that elements of all three ideas are beneficial and could be included in a more optimal solution for scaling grantmaking. My proposed concept is inspired by how real world venture funds operate, with a few key differences.
The concept starts with a website that has a fully digital grant application process. Applicants create user accounts that let them edit applications, and applicants can choose from a variety of options like having the grant be hidden or publicly displayed on the website, and posting under their real names or a pseudonym. Grants have discussion sections for the public to give feedback. Anonymous project submission helps people get feedback without reputation risk and gauge a project’s funding potential before committing significant time and resources to it.
If the applicant opts to make an application public, it is displayed for everyone to see and comment on. Anyone can contact the project creator, have a public or private discussion on the grant website, and even fund a project directly.
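As a rough illustration, the application options described above might map onto a data model like the following. Every name, field, and value here is an assumption for illustration only; the post specifies the features, not an implementation:

```python
from dataclasses import dataclass, field
from enum import Enum

class Visibility(Enum):
    # Hypothetical values mirroring the hidden/public choice in the post.
    HIDDEN = "hidden"  # visible only to the central grantmaking team
    PUBLIC = "public"  # listed on the site for anyone to view, discuss, and fund

@dataclass
class GrantApplication:
    title: str
    author: str  # real name or a pseudonym, per the applicant's choice
    visibility: Visibility = Visibility.HIDDEN
    comments: list = field(default_factory=list)

    def is_discoverable(self) -> bool:
        # Only public applications appear in the site-wide listing.
        return self.visibility is Visibility.PUBLIC

    def add_comment(self, text: str) -> None:
        # Public feedback thread; lets donors flag concerns before funding.
        self.comments.append(text)
```

The point of the sketch is simply that visibility and pseudonymity are per-application settings the applicant controls, while the feedback thread is attached to the application itself.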
The website is backed by a centralized organization that decides which proposals to fund via distributed grantmaking. Several part-time or full-time team members run the organization and assess the quality and performance of grantmakers. EAs in different cause areas can apply to be grantmakers. After an initial evaluation process, beginner grantmakers are given a role like “grant advisor” and given a small grantmaking budget. As grantmakers prove themselves effective, they are given higher roles and a larger grantmaking budget.
While powered by decentralized grantmakers, the organization offers centralized funding options for donors who do not want to evaluate grants themselves. Donations can be tax-deductible, non-tax-deductible, or even structured as impact investments into EA initiatives. Donors can choose cause areas to fund, and can perhaps even fund individual grantmakers.
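The grantmaker progression described above could be sketched as a simple tiered-budget model. All tier names and budget figures below are hypothetical placeholders (the post only names the “grant advisor” role), intended just to illustrate how a central team might cap and escalate each grantmaker’s discretionary budget:

```python
from dataclasses import dataclass

# Hypothetical tiers and budget caps -- the post does not specify numbers
# or role names beyond "grant advisor".
TIERS = [
    ("grant_advisor", 5_000),       # entry role after initial evaluation
    ("grantmaker", 25_000),         # promoted after a proven track record
    ("senior_grantmaker", 100_000), # largest discretionary budget
]

@dataclass
class Grantmaker:
    name: str
    tier_index: int = 0
    spent: float = 0.0

    @property
    def budget(self) -> float:
        return TIERS[self.tier_index][1]

    def can_fund(self, amount: float) -> bool:
        return self.spent + amount <= self.budget

    def record_grant(self, amount: float) -> None:
        if not self.can_fund(amount):
            raise ValueError("grant exceeds remaining budget")
        self.spent += amount

    def promote(self) -> None:
        # Called by the central team after a positive performance review.
        if self.tier_index < len(TIERS) - 1:
            self.tier_index += 1
```

The design choice worth noting is that the central organization only evaluates grantmakers, not individual grants, which is where the order-of-magnitude reduction in evaluation effort comes from.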
This model greatly increases awareness of grant opportunities across all areas of effective altruism. It makes it possible for grantees to seek funding from many sources in a centralized location, and for donors to choose their own grants if they want instead of relying on something like EA Grants to make the right grant decisions. Via decentralized grantmaking, large amounts of funding can be channeled into small grants, with higher average evaluator subject matter expertise and lower evaluator time commitment per grant compared to EA Grants.
Conclusion
Hopefully this post inspires additional thoughts, and more importantly, actions in the area of facilitating additional funding towards grants to EAs, EA projects, and new nonprofit and for-profit EA organizations.
If feedback for this grantmaking idea is sufficiently positive, I am interested in spending time making this idea a reality. Otherwise, hopefully this post provides useful ideas for future grantmaking organizations and programs to consider.
Update
Based on this post’s comments, it seems like further discussion and coordination around improving grantmaking within EA would be beneficial. I’ve created the #ti-funding channel within Rethink Charity’s Slack team to promote greater discussion and coordination around this topic. Further updates to come.
Nice post, Brendon!
I’ve been of the view for the last couple of years that it’d be useful to have more dedicated effort put toward funding EA projects.
I have a few factual contributions that should help flesh out your strategic picture here:
BERI, in addition to EA Grants, is funding some small-scale projects. In the first instance, one might want to bootstrap a project like this through BERI, given that they already have some funding available and are a major innovator in the EA support space right now.
OpenPhil does already do some regranting.
EA Ventures attempted, over the course of some months, to do this a few years ago, which you can read at least a bit about here: http://effective-altruism.com/ea/fo/announcing_effective_altruism_ventures/. I think it failed for a range of reasons including inadequate projects, but it would be worth looking into this further.
Notwithstanding these factors, I still think this idea is worth exploring. As you suggest, I might start off by creating a grant application system. But I think the most important aspects are probably not the system itself so much as the quality of evaluators and the volume of funders. So it might be best to try to bootstrap it from an existing organization or funder, and to initially accept applications via a low-tech system, such as Google Doc proposals. I’d also emphasise that one good aspect of the status quo is that bad ideas mostly go unfunded at present, especially ones whose low quality could damage the reputation of EA and its associated research fields, or ones that could inspire harmful activity. There are more potentially harmful projects within the EA world than in entrepreneurship in general, and so these projects might be overlooked by people taking an entrepreneurial or open-source stance; this is worth guarding against.
One meta-remark is that I generally like the conversations that are prompted by shared Google Docs, and I think that this generates, on average, more extensive and fine-grained feedback than a Forum Post would typically receive. So if you put out a “nonprofit business plan” for this idea, then I figure a Google Doc (+/- links from the Forum and relevant Facebook groups) would be a great format. Moreover, I’d be happy to provide further feedback on this idea in the future.
Hi Ryan, thanks for sharing information and feedback! I completely agree; practically speaking, spending a long time building something without market feedback/validation is not a good idea, so using an existing way to process applications and operating under an established organization would be a great way to get started effectively.
I am curious if you have any feedback on the fused proposal that I had in mind, and how to potentially improve the design in order to protect against the possibility of funding low-quality or harmful projects. I was imagining that since there is a discussion section for each proposal, anyone could mention potential problems that could arise from funding a proposal, and donors could check this section for feedback before contributing. Perhaps the benefits from this openness do not exceed the potential harm but it’s difficult for me to assess this.
What does this achieve that Google Docs linked from the EA Forum can’t achieve? I think it should start with a more modest MVP that works within existing institutions and more extensively leverages existing software products.
This sounds good.
I’m not sure what you mean by “centralized funding options”
This sounds good.
I’m grateful that someone wrote this post. :-)
Personally, I find your proposal of fusing three models promising. It does sound difficult to get right in terms of both technical web development and setting up the processes that actually enable users to use the grant website as it was set out to be used. It would probably require a lot of iterative testing as well as in-person meetings with stakeholders (i.e. this looks like a 3-year project).
I’d be happy to dedicate 5 hours per week for the next 3 months to contribute to working it out further with key decision makers in the community. Feel free to PM me on Facebook if you’d like to discuss it further.
Here are some further thoughts on why the EA Grants structure has severe limitations
My impression is that CEA staff have thoughtfully tried to streamline a traditional grant making approach (by, for example, keeping the application form short, deferring to organisations that have expertise in certain areas, and promising to respond in X weeks) but that they’re running up against the limitations of such a centralised system:
1) not enough evaluators specialised in certain causes and strategies who have the time to assess track records and dig into documents
2) a lack of iterated feedback between possible donors and project leaders (you answer many questions and then only hear about how CEA has interpreted your answers and what they think of you 2 months later)
Last year, I was particularly critical of how little useful feedback was shared with applicants after they were denied with a standard email. It’s valuable to know why your funding request was denied – whether it was because CEA staff lacked domain expertise or because of some inherent flaws in your approach that you should be aware of.
But applicants ended up having to take the initiative themselves to email CEA questions because CEA staff never got around to emailing some brief reasoning for their decisions to the majority of the 700ish applicants that applied. On CEA’s side there was also the risk of legal liability – that someone upset by their decision could sue them if a CEA staff member shared rough notes they made that could easily be misinterpreted. So if you’re lucky you receive some general remarks and can then schedule a Skype call to discuss those further.
Further, you might discover then that a few CEA staff members have rather vague models of why a particular class of funding opportunities should not be accepted (e.g. one CEA staff member was particularly hesitant about funding EA groups last year because it would make coordinating things like outreach [edit] and having credible projects branded as EA more difficult).
Finally, this becomes particularly troublesome when outside donors lean too heavily on CEA’s accept/deny decision (which I think happened at least once with EA Netherlands, the charity I’m working at). You basically have to explain to all future EA donors that you come into contact with why your promising start-up wasn’t judged to be impactful enough to fund by one of the most respected EA organisations.
I’d be interested in someone from the EA Grants team sharing their perspective on all this.
Thanks for the insight Remmelt! A good way to start this would be to create an MVP much like Ryan Carey suggested so that we can get started quickly, with a prebuilt application system (Google Forms, Google Docs, a forum, etc) and possibly using a DAF or fiscal sponsor. The web app itself could take a while, but having public projects and public feedback in a forum or something would be reasonably close and take much less effort.
I am meeting with someone who has made some progress in this area early next week. Based on traction and the similarity between the other person’s system and this system, I’ll see if a new venture in this space could add value, or if existing projects in this space have a good chance of succeeding. One way or the other I’ll be in touch!
Great! Cool to hear how you’re already making traction on this.
Perhaps EAWork.club has potential as a launch platform?
I’d also suggest emailing Kerry Vaughan from EA Grants to get his perspective. He’s quite entrepreneurial so probably receptive to hearing new ideas (e.g. he originally started EA Ventures, though that also seemed to take the traditional granting approach).
Let me know if I can be of use!
Hi Remmelt, have you joined the Rethink Charity Slack? I can’t seem to find you on there.
I have sped up my review of progress in the space of small project funding. There seems to be one major project related to improving centralized grant funding. 1–2 people are interested in implementing a “Kickstarter for EA projects” at some point in the future but have not started yet. The EA Peer Funding project is essentially a “Kickstarter for making grants to individual EAs.” This is the extent of my knowledge based on Skyping with several people in this space; no one has mentioned anything else in the comments section of this post or elsewhere.
Since there doesn’t appear to be others in this area yet, I believe moving forward with concept refinement and seeking additional feedback would be a useful next step. Let’s coordinate this via Rethink Charity’s Slack!
One key challenge I see is something like ‘grant-making talent constraint’. The skills needed to make good grants (e.g. good judgement, domain knowledge, maybe tacit knowledge, maybe relevant network, possibly commissioning/governance/operations skill) are not commonplace, and hard to explicitly ‘train’ outside i) having a lot of money of your own to practise with, or ii) working in a relevant field (so people might approach you for advice). (Open Philanthropy’s recent hiring round might provide another route, but places were limited and extraordinarily competitive).
Yet the talents needed to end up at (i) or (ii) are somewhat different, as are the skills to acquire: neither (e.g.) having a lot of money and being interested in AI safety, nor being an AI safety researcher oneself, guarantee making good AI safety grants; time one spends doing either of these things is time one cannot dedicate to gaining grant-making experience.
Dividing this labour (as the suggestions in the OP point towards) seem the way to go. Yet this can only get you so far if ‘grantmaking talent’ is not only limited among people with the opportunity to make grants, but limited across the EA population in general. Further, good grant-makers will gravitate to the largest pools of funding (reasonably enough, as this is where their contribution has the greatest leverage). This predictably leads to gaps in the funding ecosystem where ‘good projects from the point of view of the universe’ and ‘good projects from the point of view of the big funders’ subtly differ: I’m not sure I agree with the suggestions in the OP (i.e. upskilling people, new orgs), but I find Carl Shulman’s remarks here persuasive.
+1 I didn’t spell it out this explicitly, but what I found slightly odd about this post is that the bottleneck on more grantmaking is not infrastructure but qualified grantmakers.
I propose an infrastructure that generates more active, qualified grantmakers: people who are close to being good grantmakers (as Gregory says, good judgement, domain knowledge, relevant network, etc.) are given the ability to recommend grants from a centralized fund. Donors can contribute to that fund to support small projects without the hassle of evaluating dozens of projects themselves, with the possibility of earmarking funds for specific grantmakers.
I also aim to solve the awareness problem for EA projects that are requesting funding: EA Grants does not at present have a way for non-CEA staff to learn about possible grants, so only a handful of people can actually assess grants, and people who might be great grantmakers are left out. This also requires infrastructure.
That’s fair.
All of your ideas listed are already being worked on by some people. I talked just yesterday to someone who is intending to implement #1 soon, #3 will likely be achieved by handling EA Grants differently in the future, and there are already a couple of people working on #2, though there is further room for improvement.
Would be good to know who these people are to better co-ordinate the community’s efforts (and indeed to see if they’re serious endeavours—I get the feeling a lot of EA projects end up being started and then abandoned, so the fact that someone is already working on it shouldn’t necessarily stop others from doing so).
Agreed! I, for one, would like to know who is handling “Idea 2.” I have talked to several people working on funding small projects and have only heard about Idea 1 and Idea 3. Idea 3 doesn’t seem to have anyone actively working on it, just thinking about it.
Here is a previous post on EA crowdfunding. Not sure if anyone is working on it actively, but maybe it’d be possible to rope the author into a project. Here are some other vaguely related posts:
http://effective-altruism.com/ea/14d/donor_lotteries_demonstration_and_faq/
http://effective-altruism.com/ea/1ey/can_a_transparent_idea_directory_reduce/
http://effective-altruism.com/ea/1k6/the_almighty_hive_will/
https://www.facebook.com/groups/peerfunding/
EA Funds, update, complaint, complaint
EA Hotel (writing this comment from the hotel dining room)
As Henry says, it seems like a lot of EA projects get started and then abandoned. It was just over a year ago that Peter Hurford wrote “I guess another important next step would be learning from why similar things like EA Ventures, Impact Certificates, and the Pareto Fellowship didn’t get more traction and were shut down.” (source). So if we zoom out a bit and view this “small scale EA funding” category broadly, it appears to be littered with abandoned projects.
The same appears to be true for various EA wikis that people have created. The EA community seems to have a very short collective attention span and/or a very high appetite for novelty; people rarely seem to realize that the thing they are trying to do was already proposed or implemented in prototype form by 6 other people before them. I wonder if the lowest-hanging fruit would be to try to write a history of either attempts to provide small-scale EA funding or attempts to create a wiki, interview people who were involved in every failed project, try to discern patterns and debug the problems. In any case, ironically this causes me to update away from funding small-scale stuff a bit, and towards funding any EA organization that’s proven it has some institutional staying power!
A lens which might explain why both EA wikis and EA peer funding are so hard: In both cases, the challenge is to establish a Schelling point. A wiki will have a hard time getting writers if it doesn’t have readers, and it will have a hard time getting readers if it doesn’t have writers. A funding platform will have a hard time getting projects if it doesn’t have funders, and it will have a hard time getting funders if it doesn’t have projects. So in addition to looking at failed attempts to establish Schelling points, it might also be useful to examine successful attempts. Here are some that come to mind:
This forum. I believe this forum evolved out of a group blog which was invite-only, Ryan Carey might know more.
EA Global. EA Global was originally called the EA Summit, and Geoff Anders told me that the first summit nearly did not happen because EA organizations were having trouble coordinating.
Less Wrong 2.0. There was a period of several years where LW was in a state of decline, and every few months someone would write a post about how LW was in decline and how maybe it could get fixed and how someone should really do something about it. Nothing happened until a few people (Matt Graves, Oliver Habryka, Ben Pace, Ray Arnold, others?) got together and got really serious about it, went around talking to lots of different people about why they weren’t writing for Less Wrong, got some funding from EA Grants, etc.
In the last two cases it seems like the Schelling point was established through intensive networking and finding a compromise that achieved everyone’s interests simultaneously. If I had an MBA I would probably call it “building consensus among stakeholders”. People who spend a lot of time thinking things through independently or building infrastructure without getting anyone’s input don’t seem to be as successful. If you want to create a city off in rural Utah, the first step is not to go off and build the city, the first step is to found a religion and wait until it has a bunch of members. (Though I could believe that thinking things through independently and writing up your thoughts might be useful to a networking expert who comes along later to spearhead your project. Same for creating Facebook groups which can later be used as coordination points, e.g. the “New EA hub search and planning” FB group, which gave me an opportunity to promote the EA Hotel. However, I think a failed shot at establishing a Schelling point can actually be harmful if it creates a self-fulfilling prophecy that creating a Schelling point is not feasible.)
Another point is that if someone is already working on something kinda similar to what you are working on, it might be best to glom onto their thing instead of starting your own thing. For example, with LessWrong 2.0, Matt Graves was the person officially in charge of revitalizing LessWrong, but I think he got a big boost when Oliver Habryka and others glommed on to that project. It’s always nice to be the leader so you get to do things your way and make all the important decisions yourself, but the entire challenge with establishing a Schelling point is to coordinate disparate interests. (And by extension, in the same way you yourself are going to be more motivated to work on a project that you feel you have a leadership role in, giving other people leadership roles in your project is maybe a way to get them feeling invested.) So if you’re unable to coordinate with a person who is already working on a similar project, due to networking ability that’s insufficient to discover them or compromise ability that’s insufficient to work with them, you are probably doomed anyway. In general, I think having multiple projects competing for resources is bad, e.g. I think LW 2.0 took off around the time Arbital finally threw in the towel.
(It may be that the most important thing is just to be persistent—in the same way startups are said to be an emotional roller coaster, I’ll bet nonprofit projects are the same way, and the planning fallacy means everything takes longer than expected. Hopefully this comment wasn’t too discouraging!)
Exciting stuff!
Just a note on the EA Wiki (and on project abandonment in general): lots of projects seem to be really badly run. The EA Wiki was offline for months because of server issues, and until recently you couldn’t even register as a new user.
I’m not sure EAs have a shorter attention span than anyone else – I imagine most would maybe try a couple of times to get onto the wiki and then just give up. That’s part of the reason I’m not worried about project duplication: so many efforts are half-baked that we shouldn’t allow one party to have a monopoly on a particular idea.
Hmmm… One thought is that if projects are half-baked due to a shortage of work hours being thrown at them, consolidating all the work hours into a single project might help address the problem. I also think having more people on the project could help from a motivation perspective, if any given project worker feels responsible for fulfilling their delegated responsibilities and is motivated by a shared vision. But ultimately it’s the people who are doing any given project who will figure out how to organize themselves.
Thanks for the information.
The “ideas” were listed more to break down possible implementations than to propose executing all of them in their exact forms. 1 could be incorporated into the new EA Hub perhaps, and I am aware Dony Christie is exploring EA Peer Funding, but perhaps you are referring to other people. I am not familiar with anyone that is working on 2 but I’m happy to hear that this is being worked on in some capacity. Yeah, I agree that improving EA Grants would be a good way to make something like 3 possible, and will likely end up happening.
I believe the exact forms of each listed idea contain problems, and my intended proposal is an attempt to fuse all of the ideas and eliminate the weaknesses of implementing any single listed idea on its own. I don’t know of any attempts at a fused approach, but please correct me if I’m wrong. For example, regarding the issue of implementing all three listed ideas in their exact forms, centralizing this sort of grant funding, much like EA Grants has done, could cause many problems. There is currently no grant transparency. A lot of possibly useful projects may have applied, not gotten funded, and then given up, or, as Remmelt mentioned, other donors may not support a project because it was not funded by CEA. There is no way for other donors in the community who are uniquely equipped to evaluate, contribute to, or fund projects to actually see what projects exist in EA and do so. Basically, not only is centralization potentially inefficient, it may have already led to a large number of project failures, some of which might have evolved into successful, high-impact projects under a different grantmaking model.
Seconding alexherwix: unless there are privacy concerns, sharing information about other people working in this space and their ideas would be useful for coordination purposes. Also, early-stage projects often don’t work out, so if the project is important enough, coordinating efforts, or perhaps even building the same broad idea with different teams pursuing very different implementations, is a good idea in case one team-implementation pairing succeeds where the others would fare poorly or prove highly suboptimal.
I agree that collaboration between the various implementations of the different ideas is valuable and that it can be good to help out technically. I’m less convinced of starting a fused approach as an outsider. As Ryan Carey said, the most important things for good work in this field are i) having people who are good at grantmaking, i.e. making funding decisions, and ii) the actual money.
Thinking about how to ideally handle grantmaking without having either strikes me as putting the cart before the horse. While it might be great to have a fused approach, I think it will largely be up to the projects that have i) and ii) whether they wish to collaborate further, though other people might be able to help with technical aspects.
I generally agree, but I think there may be value in coordinating different parties and partial solutions under “one roof”. If you are still in contact with the people interested in this topic, maybe direct them here to get some knowledge exchange and coordination going? Or provide more detailed information about the organizations/people interested in the topic so that they can be reached out to :)
Minor point:
They are currently hiring for a full-time EA Grants Evaluator.
Thanks, I have updated the post to reflect this information.
Hey Brendon,
I love your enthusiasm and creativity, and you did a great job putting it into words and out there! :) Writing a post like this and gaining feedback from the community seems to me a great first step toward actually making progress on an important topic like this.
I have thought quite a lot about ideas like this myself, and as someone experiencing funding constraints/difficulties, I see it as a worthwhile cause to pursue (I might be biased, though ;) ).
I was also in the tech talk, and I would love to be kept in the loop on this as well as to contribute where it makes sense. Maybe it makes sense to use one of the Slack channels for more in-depth discussions, or let’s set up a special interest group call around the topic! It may also make sense to start something like a git project and use its wiki features to integrate all the valuable ideas and feedback that are starting to pour into this thread. Short term, it might make sense to create a project plan and look for funding to make this happen in a sustainable way. I imagine Open Phil or EA Grants may actually be interested in something like this.
I have experience in web development as well as scientific approaches to solution development (e.g., https://en.wikipedia.org/wiki/Design_science_(methodology)). Moreover, I am working on the topic of knowledge management/integration in the context of communities which would likely be an important part of actually making this work.
Hey Alex, thanks for your message! Personal experience with funding constraints is one of the things that led me to consider the current grantmaking space and whether/how it should be improved as well. I have just created the #ti-funding channel in the Rethink Charity Slack to foster greater collaboration and have added you to it.
Minor point:
Also CFAR?
Beyond the cognitive bias debugging material from CFAR, I can’t see any obvious substantial comparative advantages that the EA community has for improving the efficacy of existing individuals. In other words, it seems to me that the main ways the EA community can increase an individual’s impact is via (a) the onboarding process / introducing them to the research (“new individuals”) (b) helping them to find a good fit on a team with other EA-aligned individuals / introducing them to the community (“existing organisations”, “new organisations”). If you want to just generally become more effective as an individual, read a productivity bestseller, attend a reputable leadership course, find an older mentor in your field, see a shrink etc. Sometimes I think EAs forget that we’re not the best at everything ;-)
Thanks for mentioning CFAR, I had a feeling I was omitting organizations that didn’t come to mind or that I was not aware of. I have removed my statement about there being only one organization in the area of helping “existing individuals” because it is incorrect.
I agree that EAs may not have a comparative advantage in improving personal efficacy. My point is about funding and emphasizing these areas. For instance, perhaps hiring personal assistants for high-impact EA direct workers should receive much more funding than it does now (this is just a hypothetical example).
Shouldn’t organisations just hire personal assistants for their highest-impact staff? If you have more examples in the realm of “improving the efficacy of ‘existing individuals’” I’d be interested to hear them.
One component of the EA Peer Funding network was enabling small grants between people for different purposes. For instance, this could have taken the form of healthcare expenses, educational expenses, etc. The “EA Hotel” is another example of trying to assist existing community members with living and food expenses so they can pursue things.
I’ve only skimmed this so far, but lots of good ideas, very timely (I was almost waiting for someone to post something like this with all the similar discussion that’s been happening recently, and I’m thinking mostly in conversation), and you’ve gone up in my estimations. Thank you. Upvoted.
However, I also want to say that I still think of you as being wildly overconfident in general. This is partly based on previous interactions with you, partly based on a fellow EA with a great deal more investment experience than me being initially excited by Antigravity Investments and then thinking there wasn’t much of substance there when he dug a bit deeper, partly because of the time you spent with Leverage Research (who I’ve also found to be generally wildly overconfident).
There’s probably not much more I feel I can say at this point, even pseudonymised, and I apologise to everyone for that (and also for not expressing concern to a large group of people when I saw that Antigravity appeared to have taken over the EA Peer Funding Facebook group). Take this as it is: a vague, unsubstantiated claim from an anonymous-for-now member of the community.
But if people do start considering supporting Brendon to have control of such a grantmaking entity—distributed or otherwise—I’d encourage you to DM me and I’ll think about what further details I’m able to give. I’d also encourage more people at that point to give their impressions of Brendon’s level of epistemic humility (e.g. perhaps it has dramatically improved in recent years). He is no doubt an impressive, altruistic individual, but I personally would feel uncomfortable having him decide who gets to be an EA grantmaker.
Hi byanyothername,
I’m not sure who you are, but I appreciate the candid feedback. I would like to point out, however, that giving anonymous, discrediting feedback in a public setting is discouraging to the receiver and quite possibly harmful. I am not sure whether anonymous, discrediting feedback is a useful community norm; I haven’t thought about this in much detail. Prior examples in the community appear to involve individuals giving public, non-anonymous feedback in very extreme cases, with a tremendous amount of supporting information. Perhaps you can share additional thoughts about your choice to provide anonymous, discrediting feedback with minimal information, as opposed to pursuing another course of action, such as privately discussing your concerns with me and updating your opinion based on what I share, before going around offering to share negative opinions about me without my say in the matter and as someone who barely knows me.
Your post makes me feel obligated to defend myself in order to prevent possible misconceptions from spreading. I get the sense that you are judging me based on a highly limited number of data points, and that you do not have a good sense of me as a person or what I’ve done. I believe judging people too quickly is generally considered a bad practice.
I will respond to your examples individually.
I have attempted to launch many early-stage projects before. In order to make sure projects are useful, a large amount of feedback must be obtained, and projects must be presented in their worst possible state, without much validation and with shoddy, minimum viable product execution, by virtue of being early stage. Additionally, entrepreneurs have to express high optimism both to the public and to themselves, while simultaneously trying to poke holes in the idea from every possible direction, in order to maintain their own motivation and the motivation of team members, funders, etc., while ensuring that what is being built is actually of value. Interactions with people working on early-stage ventures could induce an impression of overconfidence, incompetence, or other potentially negative traits when viewed without any additional data points (for instance, “risks” are not part of standard pitch decks to VCs, and founders are instructed to generally act highly optimistic, though not to the point of deception, of course).
You mention that you have found me wildly overconfident, but I do not think most/all of the people who know me think that, so perhaps you are basing this off one data point, maybe from hearing me speak about an early-stage project in a promotional context. I have expressed very high uncertainty about cause areas, donating now vs. later, the value of projects I’m working on, the value of projects other people are working on, AI timelines, and many other EA topics, as well as pretty much every belief, like “the value of taking alpha lipoic acid daily”, to provide a completely random example I have thought about recently. My very limited experiments with self-calibration in both casual and formal settings (like using PredictionBook.com) don’t indicate anything amiss to me. However, there is some probability that I am overconfident in some ways, or appear that way in certain contexts, so I can definitely get additional feedback on that from other EAs who know me well.
Regarding your comments about Antigravity Investments, I would imagine this person’s opinion is rather old, perhaps from over a year ago when I solicited some very early feedback from various EAs. Many of them thought some parts of the idea were not useful. This is a normal part of validating whether an idea and its various features are worth pursuing; I would be shocked if 100% of people expressed high enthusiasm about 100% of the idea. There is pretty much always very high variance in feedback on any sort of idea, startup or not, although curing cancer at $1 per treatment would probably get nearly universal enthusiasm. I hope that you are not implying or thinking that I ignore feedback and pursue projects blindly. As someone who has personally shut down a lot of things I’ve started, I take outside feedback very seriously. As entrepreneurs know, market feedback and market traction are everything, not the theoretical optimality of an idea. In fact, I terminated an idea similar to EA Funds solely based on one EA’s opinion that a web-based donor-advised fund would experience nearly zero traction due to a lack of broad market demand and was thus not worth pursuing. There are now many startups and projects doing something very similar, maybe over 5 separate teams in the EA community alone. Weighing an internal model against external feedback on parts of that model can be challenging in both directions.
Finally, I spent literally one week at Leverage Research many years ago, as a 13–14-year-old high school freshman, doing an “internship” learning about creating strategic plans using the yEd graphing software. I have not had any contact with Leverage since then, have not worked on any Leverage Research projects at any point in my life, etc. Hopefully this is obvious, but I think claiming an association between a one-week internship spent learning yEd as a young teenager and my current level of confidence is a bit of a stretch.
No offense, but based on the quality of your analysis and your judgement in posting this, I would personally feel uncomfortable having you warn the community about others in it, although I am again open to changing my opinion if people have thought about this and think this sort of thing is a good norm. And my opinion of you is also based on a very limited number of data points, which may not indicate much about your interpersonal demeanor, personality traits, overall judgement, etc.
Ah this is awesome. Thank you. And sorry.
The anonymous thing is mostly me realising that I hardly ever criticise people, wanting to practise, but knowing I’m going to make a ton of mistakes as I’m kinda new to this! I refrain from criticising people out of fear, so I thought I’d hide a bit under a cloak of anonymity until I get more skilled at this (also criticism is a particularly emotional thing, so I don’t want to unfairly tarnish my reputation after a few early mistakes).
Sorry again for the initial upset this probably caused. Fortunately, I’m pretty sure the community’s on your side (I mean, I am, for starters!)
I found this baffling. Rough analogy: “I hardly ever punch people, so I thought I’d practise on you.” You should criticise people if and when they merit criticism, not because you want to practise. I would expect you caused a great deal of upset to Brendon (this would have upset me greatly), which, for the questionable benefit of ‘practising criticism’, does not seem justified. I urge you to refrain from this sort of thing in future. If you want to improve in a safer way, I suggest you write up your criticisms and then show them to someone else to collect feedback before deciding whether or not to post them.
Oh man, that’s not what I meant, sorry! I wasn’t deliberately overdoing it for practice (and I’ve generally been much more critical on here than I am in person, I haven’t singled out Brendon). I have doubts about people’s reasoning in my mind all the time, but it’s very rare that I say them out loud, and thereby give others the chance to learn from them, present evidence to the contrary or say how they think I’m being irrational. I was just trying to express my doubts out loud the way other people seem to, but I knew I’d make some mistakes and I really did fuck up.
Don’t worry, I’ve given up on the idea. I’ll shut down my account if I can (struggling to find the option right now), and I don’t plan on starting any more.
Sorry again.
Maybe just send the feedback privately next time?
FYI I spent five karma points to say this to you, so you better take it seriously.
Update: I’ve now read your post more thoroughly. I love the proposal.