I run Antigravity Investments, an EA-aligned investing firm that helps EA nonprofits and individuals with investing. Our EA Forum article with public recommendations is mostly focused on cash management, although Appendix B discusses higher-EV investing options.
We give free advice and typically charge a low fee for directly managing portfolios. Feel free to reach out at firstname.lastname@example.org.
For a DIY approach in the United States, we recommend a portfolio of low-fee ETFs. A DIY approach with ETFs makes it possible to donate investments that have appreciated in value without paying capital gains tax, which is an optimal way to donate.
Thanks for asking! At this time we do not specifically limit the types of early-stage high-impact activities that can apply. Early-stage nonprofits, for-profits, and personal projects would all fall under the scope of acceptable activity types.
The following arguments are ideas and have not been thoroughly researched. They may not reflect my actual views. Counterarguments are not mentioned because the OP is “mainly interested in seeing critiques.” I may post counterarguments after the reward deadline has passed.
Claim to argue against: “$172,000 to the EA Hotel has at least as much EV as $172,000 distributed randomly to grantees from EA Meta Fund grantees or EA grants grantees.”
Argument 1: The EA Hotel has a low counterfactually-adjusted impact
In this post, the EA Hotel states:
Out of 19 residents, 15 would be doing the same work counterfactually, but the hotel allows them to do, on average, 2.2 times more EA work—as opposed to working a part time job to self-fund, or burning more runway.
This datapoint supports the view that most EA Hotel residents would be doing the same work whether or not they stay at the hotel. The claim that “the hotel allows them to do, on average, 2.2 times more EA work” could be incorrect. To gain more certainty about this, the EA Hotel should track what applicants who are not accepted actually end up doing instead.
EA Hotel residents have many options to consider to do the same work while not staying at the hotel. For example, depending on the time and location requirements of the work, they could do some combination of: (1) part-time work to finance their living expenses, (2) living with parents, friends, or another location with near-zero living expenses, or (3) living in very low-cost housing that resembles the cost of the EA Hotel.
If someone pursues option (2), the EA Hotel is negative EV for them, because a free option exists while the EA Hotel consumes community funds.
If someone pursues options (1) and (3), they might only have to work a very limited amount of time. For example, I believe I recently heard of someone who found a one-bedroom living arrangement in a large house in Berkeley, CA for $500 a month, although they have to share a bathroom with many people. So someone might only need to do paid work 25% of the time and can do EA work the other 75%. This suggests that the “2.2 times more EA work” figure greatly overstates the benefit of the EA Hotel in terms of reducing living expenses. Pursuing options (1) and (3) seems to be feasible for the vast majority of people.
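The back-of-envelope reasoning above can be sketched as a quick calculation. All figures are illustrative assumptions for the sake of the example, not data from the EA Hotel or any survey:

```python
# Illustrative calculation: how much paid work would someone need
# to cover low-cost housing plus minimal other expenses?
# All numbers below are assumptions, not measured data.

hourly_wage = 15                 # assumed part-time wage, USD/hour
full_time_hours_per_month = 160  # ~40 hours/week
monthly_cost = 600               # assumed: $500 rent + $100 other expenses

hours_needed = monthly_cost / hourly_wage                         # 40.0 hours
fraction_on_paid_work = hours_needed / full_time_hours_per_month  # 0.25

print(f"Paid hours needed per month: {hours_needed:.0f}")
print(f"Share of full-time spent on paid work: {fraction_on_paid_work:.0%}")
print(f"Share left for EA work: {1 - fraction_on_paid_work:.0%}")
```

Under these assumed numbers, paid work takes 25% of a full-time schedule, leaving 75% for EA work; higher living costs or a lower wage would shift the split accordingly.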
If direct funding allows people to pursue option (3) and secure low-cost housing, and if the cost is around the same as the EA Hotel, there may be no need for the EA Hotel itself to exist. The question becomes what is the counterfactually-adjusted impact of funding living expenses at the EA Hotel compared to option (3)? Adjustments should be made for things like missing out on the benefits of living elsewhere than Blackpool as well as relocation time and expenses which would further reduce counterfactual impact. The EA Hotel community certainly provides benefits, although coworking out of REACH may provide similar benefits.
Argument 2: The EA Hotel should charge users directly instead of raising funding
Rather than fundraising from EAs, the hotel should try to directly charge people who are benefiting from their services and community, which is an argument against donating to the hotel.
There doesn’t seem to be a need to fund people who can afford the hotel. It’s not clear what proportion of people fall into this category, but considering that it only takes roughly 13 weeks of full-time work at $15/hour to earn the $7,900 for a one-year stay at the hotel, it is possible that the majority of residents can already afford to stay there.
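As a sanity check on the arithmetic (the $7,900 and $15/hour figures come from the text above; the 40-hour week is an assumption):

```python
# Check: how many weeks of full-time work at $15/hour cover a
# one-year stay at the EA Hotel priced at $7,900?

annual_hotel_cost = 7900   # one-year stay, figure from the post
hourly_wage = 15           # USD/hour
hours_per_week = 40        # assumed full-time schedule

hours_needed = annual_hotel_cost / hourly_wage   # ~526.7 hours
weeks_needed = hours_needed / hours_per_week     # ~13.2 weeks

print(f"Weeks of full-time work needed: {weeks_needed:.1f}")
```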
For people who cannot afford the EA Hotel, applicants to funding organizations like EA Grants can include that they are requesting funding for living expenses and indicate EA Hotel expenses as part of their requested grant funding. EA Grants evaluators and other funders may be better equipped to evaluate the EV of projects people are working on as opposed to EA Hotel staff. If EA Grants can already cover this, there is no need to donate to the EA Hotel.
Argument 3: Funding projects has a higher impact than funding living expenses
I assume that EA Grants funds applicants’ project expenses as well as their personal salary and living expenses. This could be higher impact than solely funding living expenses. Working at the EA Hotel on an unfunded project may be quite unproductive, particularly if the project requires funding to get anywhere. Seeking early-stage EA project funding also seems to involve waiting for long periods of time (perhaps months) for funders to get back to you, rather than being able to work full-time on acquiring funding.
Argument 4: People should not donate to the EA Hotel until they improve their impact metrics and reporting
The EV estimation for the EA Hotel is highly mathematical and commenters have expressed that it is difficult to follow. Actual impact reporting appears to consist of testimonials which are hard to evaluate. It’s even trickier to evaluate the counterfactually-adjusted impact.
There is probably a nontrivial number of people who do not seek support due to the presence of a fee, even if they can theoretically afford it (see trivial inconveniences). Unfortunately, I’ve seen this happen in practice.
The potential downside (and upside) of diversifying by adding some tilts and consistently sticking with them is limited, so I don’t see a major problem with “non-advanced investors” following the advice. Investors should be aware of things like rebalancing and capital gains tax; perhaps “intermediate investor” is a better term.
It takes a certain degree of investment knowledge and time to form an opinion about the historical performance of different factors and expected future performance. It also requires knowledge and time to determine how to appropriately incorporate factors into a portfolio and how to adjust exposure over time. For example, what should be done if a factor underperforms the market for a noticeable period of time? An investor needs to decide whether to reduce or eliminate exposure to a factor or not. Holding an investment that will continue to underperform is bad, but selling an investment that is experiencing cyclical underperformance is a bad timing decision which will worsen performance each time such an error is made.
As a concrete example, the momentum factor has had notable crashes throughout history that could cause concern and uncertainty among investors who were not expecting that behavior. Decisions to add factors to a portfolio need to take into account maintaining an appropriate level of diversification, tax concerns (selling a factor fund could incur capital gains taxes, and factor mutual funds will pass the capital gains they incur while tracking factors on to investors, whereas factor ETFs almost certainly won’t), and the impact of fees, among other considerations.
This post was intended as a grant application announcement post that also happened to contain some information about new funder-friendly and applicant-friendly policies we are adopting. I did not include any information about our evaluation process or risk reduction process in the body of the post, so I would not expect the post to convey much awareness of either process, or of the reasons why long-termist applications don’t get funded.
I am curious which of the ideas we included you think address your first point about grantmakers being unable to vet the project. I’m not sure that application sharing, rolling applications, or providing feedback to grant applicants addresses your first or second point.
To elaborate more on risk, I wrote in another comment on this post that:
We have several layers of checks to help reduce risks and improve grant decision making including initial staff review of incoming applications, angels sharing their evaluations with one another and talking with external contacts/experts if appropriate, and hearing opinions of external grantmakers on grant applications we have received (we still need to talk with grantmakers to set this up).
I think that an initial staff review can help detect risks, and if we notice a large problem with downside risk in incoming projects, we can enhance the initial staff review process. The angel evaluation period is where a lot of nuanced considerations about risk can come up, since angels can share their perspectives on a grant proposal with other angels and external experts, and we have angels with significant experience in areas like meta and AI. Finally, this wasn’t mentioned in the post, but we are aiming to share evaluations both ways with funders in EA. I think this can go a long way towards making all funders aware of all of the potential risks of a project.
Angels in the group seem to actively avoid funding projects that they feel they are not qualified to evaluate. Angels can point out funding behavior that they perceive is risky from other angels, although from what I’ve seen, our angels lean more on the side of risk avoidance than anything else.
High-quality grant applications tend to get funded quickly and are thereby eliminated from the pool of proposals available to the EA community, while applicants with higher-risk proposals tend to apply/pitch to lots of funders. This means that on average, proposals submitted to funders will be skewed towards high-downside-risk projects, and funders could themselves easily do harm if they end up supporting many of them. I’d be interested in your thoughts on that.
As Denise mentioned in a post on Jan’s project evaluation idea, there is a category of project that is “projects which are simply bad because they do have approximately zero impact, but aren’t particularly risky. I think this category is the largest of the four.” This lines up with many of the applications I am seeing. This might be different for long-term/x-risk projects specifically, but since we are a general funding group whose individual EA funders have a wide variety of backgrounds and experiences, we are not receiving a large number of such applications relative to the entire pool of applications.
Therefore, I wouldn’t say that our applications are likely to be “skewed towards high-downside-risk projects.” I expect to continue to receive a large number of projects that may have very low impact, just as other funders are likely receiving. As Oliver mentioned, “in practice I think people will have models that will output a net-positive impact or a net-negative impact, depending on certain facts that they have uncertainty about, and understanding those cruxes and uncertainties is the key thing in understanding whether a project will be worth working on.” I think that other EA funders will fund projects that match their models, but because people’s models differ wildly and are very likely wrong in many cases, as evidenced by the high failure rate of startups funded even by the most successful VCs, I don’t know if other funders are actually funding a significant fraction of the opportunities that end up having the highest impact.
To my understanding, EA Grants is the only other funder making general grants; BERI Grants and the EAF Fund focus exclusively on long-term projects, and the EA Funds focus on their respective areas and also fund larger organizations. Since EA Grants is currently closed for applications (I support rolling applications rather than application rounds), we are receiving applications that have not been funded by other funders because the only other general funder isn’t accepting applications right now. With funder application sharing, which I support, funders will be able to see the entire pool of proposals rather than the pool minus the projects other funders have already funded. This will help each funder evaluate the quality of the projects they fund relative to the quality of the projects other funders have funded.
I really like that you’re providing feedback to applicants! In general, I wish the EA community was more proactive with providing critical feedback.
Thanks! I completely agree.
I think it is fair to say you expected very low risk from creating an open platform where people would just post projects and seek volunteers and funding, while I expected that with minimal curation this creates significant risk (even if the risk comes from a small fraction of projects). Sorry if I rounded off suggestions like “let’s make an open platform without careful evaluation and see” and “based on the project ideas lists which existed several years ago, the number of harmful projects seems low” to “worrying about them is premature.”
The community has already had many instances of openly writing about ideas, seeking funding on the EA Forum, Patreon, and elsewhere, and posting projects in places like the .impact hackpad and the currently active EA Work Club. Since posting about projects and making them known to community members seems to be a norm, I am curious about your assessment of the risk and what, if anything, can be done about it.
Do you propose that all EA project leaders seek approval from a central evaluation committee before talking with others about their project and publicizing its existence? This would highly concern me because I think it’s very challenging to predict the outcomes of a project, as evidenced by the fact that people have wildly different opinions on how good an idea or a startup is. Such a system could be very negative EV: it could greatly reduce the number of projects being pursued by giving initial negative feedback that doesn’t reflect how a project would actually have turned out, or decrease the success of projects because other people are afraid to support something that did not get backing from the evaluation system. I expect significant inaccuracy from my own project evaluation system as well as the project evaluation systems of other people and evaluation groups.
Thanks! Both of those happened after I posted my comment, and I still do not see the numbers which would help me estimate the ratio of projects which applied to projects which got funded. I take it as a mildly negative signal that someone had to ask, and that this info was not included in the post, which solicits project proposals and volunteer work.
In my model it seems possible you have something like a chicken-and-egg problem: not getting many great proposals, and the group of unnamed angels not funding many of the proposals coming via that pipeline.
If this is the case and the actual number of successfully funded projects is low, I think it is necessary to state this clearly before inviting people to work on proposals. My vague impression was we may disagree on this, which seems to indicate some quite deep disagreement about how funders should treat projects.
I wrote about the chicken-and-egg problem here. As noted in my comments on the announcement post, the angels have significant amounts of funding available. Other funders do not disclose some of these statistics, and while we may do so in the future, I do not think it is necessary before soliciting proposals. The time cost of applying is pretty low, particularly if people are recycling content they have already written. I believe we are the first grantmaking group to give all applicants feedback on their application, which I think is valuable even if people do not get funded.
The whole context was that Ryan suggested I should have sought some feedback from you. I actually did that, and your co-founder noted on March 11th that he would try to write feedback on this “today or tomorrow,” which did not happen. I don’t think this is a large problem, as we had already discussed the topic extensively.
Ben commented on your Google Document that was seeking feedback. I wouldn’t say we’ve discussed the topic “extensively” in the brief call that we had. The devil is in the details, as they say.
John Maxwell brought up some interesting points. He suggests that platforms can experience the chicken-and-egg problem when getting started, and that intensive networking is a way to overcome this issue. I agree that platforms often have this problem, but the EA Angel Group resolved it not by networking intensely but by offering a lot of value to angels. This incentivizes them to join the platform even without a large number of existing grant applicants, which in turn incentivizes grant applicants to apply.
Of course, we do need a stream of incoming grant applications to remain viable, and unfortunately we encountered some unexpected issues when attempting to collaborate with EA Grants and speak with many community members as part of several strategies to acquire grant applications. As mentioned in my progress update comment, I am currently pursuing alternate strategies to achieve this objective which involve steps that I have greater control over (and fewer steps that require the approval of entities whose decisions I cannot influence). That being said, I think networking and collaboration are highly valuable, and I am scaling that up even as I pursue strategies that do not require networking to succeed.
I wrote a progress update comment regarding the EA Angel Group which covered our grant opportunity discovery activities over the last few months. We spoke with EA Grants several months ago, and to the best of my knowledge they are still determining whether to send and receive grant applications with other funders. At least one major funding group has expressed significant interest in sending and receiving grant applications with the EA Angel Group, and we are in the process of talking with various funders about this.
I mentioned the one concern I heard and my response to it in my progress update comment:
One objection to sharing grant applications among funders is that a funder would fund all of the grant proposals they felt were good and classify all other grant proposals as not suitable to be funded. From the funder’s perspective, sharing the unfunded grant proposals would be bad since other organizations could subsequently fund them, and the funder classified those grant proposals as not worth funding. I personally disagree with this objection because the argument assumes that a funder has developed a grant evaluation process that can actually identify successful projects with a high degree of accuracy. Since the norm in the for-profit world involves large and successful venture capital firms with lots of experienced domain experts regularly passing on opportunities that later become multibillion-dollar companies, I find it unlikely that any EA funding organization will develop a grant evaluation process that is so good it justifies hiding some or all unfunded applications.
Can you elaborate on:
I think for example that a ‘just-another-universal-protocol’ worry would be very reasonable to have here.
Are you suggesting that funders may be concerned about adopting a protocol which ends up providing limited value? As I’ve stated in several other comments, I think sharing grant applications can be of considerable value since arbitrarily limiting the pool of projects seems pretty suboptimal.
To avoid that I think we need to do the hard work of reaching out to involved parties and have many conversations to incorporate their most important considerations and start mutually useful collaborations. I.e. consensus building.
I agree. I did some initial outreach at first and will begin additional outreach shortly.
Thanks for pointing that out! Jan and I have also talked outside the EA Forum about our opinions on risk in the EA project space. I’ve been more optimistic about the prevalence of negative EV projects, so I thought there was a chance that greater optimism was being misinterpreted as a lack of concern about negative EV projects, which isn’t my position.
We had some discussion with Brendon, and I think his opinion can be rounded to “there are almost no bad projects, so to worry about them is premature”. I disagree with that.
I do not think your interpretation of my opinion on bad projects in EA is aligned with what I actually believe. In fact, I actually stated my opinion in writing in a response to you two days ago which seems to deviate highly from your interpretation of my opinion.
I never said that there are “almost no bad projects.” I specifically said I don’t think that “many immediately obvious negative EV projects exist.” My main point was that my observations of EA projects in the entire EA space over the last five years do not line up with a lot of clearly harmful projects floating around. This does not preclude the possibility of large numbers of non-obviously bad projects existing, or small numbers of obviously bad projects existing.
I also never stated anything remotely similar to “to worry about [bad projects] is premature.” In fact, my comment said that the EA Angel Group helps prevent the “risk of one funder making a mistake and not seeking additional evaluations from others before funding something” because there is “an initial staff review of projects followed by funders sharing their evaluations of projects with each other to eliminate the possibility of one funder funding something while not being aware of the opinion of other funders.”
I believe that being attentive to the risks of projects is important, and I also stated in my comment that risk awareness could be of even higher importance when it comes to projects that seek to impact x-risks/the long-term future, which I believe is your perspective as well.
Also, given that Brendon’s angel group has been working, evaluating, and funding projects since October, I would be curious what projects were funded, what the total amount of funding allocated was, and how many applications they got.
Milan asked this question and I answered it.
Based on what I know, I’m unconvinced that Brendon or BERI should have outsized influence on how evaluations are done; part of the point of the platform would be to serve the broader community.
I’m not entirely sure what your reasons are for having this opinion, or what you even mean. I am also not exactly sure what you define as an “evaluation.” I am interpreting evaluations to mean all of the assessments of projects happening in the EA community from funders or somewhat structured groups designed to do evaluations.
I can’t speak for BERI, but I currently have no influence on how evaluations are done, and I also currently have no interest in influencing how evaluations are done. My view on evaluations seems to align with Oliver Habryka’s view that “in practice I think people will have models that will output a net-positive impact or a net-negative impact, depending on certain facts that they have uncertainty about, and understanding those cruxes and uncertainties is the key thing in understanding whether a project will be worth working on.” I too believe this is how things work in practice. Evaluation processes seem to involve one or more people, ideally with diverse views and backgrounds, evaluating a project, sometimes with a more formalized evaluation framework that takes certain factors into account. Then a decision is made, and the process repeats at various funding entities. Perhaps this could be optimized by having argument maps or a process that involves more clearly laying out assumptions and assigning mathematical weights to them, but I currently have no plans to go to EA funders and suggest they all follow the same evaluation protocol. Highly successful for-profit VCs employ a variety of evaluation models and have not converged on a single evaluation method. This suggests that evaluators in EA should perhaps use different evaluation protocols, since different protocols might be more or less effective for certain cause areas, circumstances, types of projects, and so on.
That is correct! The EA Angel Group is designed to help individual funders who are already making grants with discovering more opportunities and hearing from other funders about possible benefits and risks of individual funding opportunities. Many people in the angel group have been heavily involved with the EA community for many years and have a history of making successful grants. Analogous to a for-profit angel group, we do not force angels to do everything through our group, we just seek to add value in terms of helping people fund better opportunities through improving opportunity discovery, evaluation, and funding processes.
Thanks for the suggestion Remmelt! I just added your primary wording recommendation to the post.
According to information that we requested from angels around our launch in October 2018, our individual funders had ~$600,000 in available capital to make early-stage grants for the remainder of 2018. Angels have been making grants during the time the group has been operating, although I am not sure of the exact volume aside from the fact that one angel recently made a grant of ~$25,000 to a project.
I am not sure of the exact volume because angels have not yet made a grant to a project that submitted our grant application form. This is because we had lower-than-expected grant application volume: we were unexpectedly delayed for many months pursuing grant sharing with EA funders and trying to launch the EA Project Platform, rather than doing a public call for applications and working with volunteers to source evaluations. We are now switching to public requests for proposals and active grant opportunity sourcing, which I expect will significantly increase the number of grant opportunities we can present to angels. We are continuing to talk with EA funders about grant sharing, and one major funder just expressed an interest in sharing grant applications, so things may be moving forward on that front.
Bringing up that possible concern is a good point Remmelt! My paragraph was specifically suggesting that established EA funders should share applications with one another. As I mentioned in my comment to Ruth, if the application systems of 5 funders capture equal fractions of all projects in existence, each funder would only be able to make funding decisions with a pool of projects that is 1/5 the size of the total number of opportunities. Arbitrarily limiting the pool of projects to evaluate seems clearly suboptimal.
I agree that people may be concerned about inexperienced EA funders making unwise funding decisions. People with that concern should actually be supporting the EA Angel Group, because if they had read our introductory article or my recent comment about this they may have realized that:
the EA Angel Group [has] an initial staff review of projects followed by funders sharing their evaluations of projects with each other to eliminate the possibility of one funder funding something while not being aware of the opinion of other funders.
We help individual funders of all experience levels avoid issues like the unilateralist’s curse by benefiting from the perspectives of other funders. Funders can point out potential risks or downsides of a project and strongly warn each other against funding a project that appears to have a material chance of causing significant harm.
But that’s just a guess and I don’t really know. I do share the sentiment that the option to downvote is too easy for people who pattern-match against abstract EA ideas like that, instead of putting in the somewhat strenuous and vulnerable work of sharing their impressions and asking further in the comment section about how the platform concretely works.
It is unfortunate that people may be downvoting without engaging in what is actually being proposed. I think that asking good questions or commenting is far better for everyone involved than giving a strong downvote based on a quick impression (possibly wiping out several standard upvotes) and leaving.
@Brendon, I thought you tried to address possible risks of making applications available online in a previous post.
That is correct, I wrote about that in my post about the EA Projects Platform, which I recently mentioned has been indefinitely delayed. The EA Angel Group does not and was not designed to make projects available online.
How do you think right now about how to address funder blindspots in built-up knowledge and evaluation frameworks – for both established EA grantmakers and new venture capitalist-style funders (who might have valuable for-profit start-up experience to build on)?
I don’t have a readily prepared analysis of addressing funder blindspots. Something that might be helpful in reducing that would be having funders share evaluations with one another, so that if one funder recognizes a potential risk that is hard to detect, other funders can factor it into consideration as well. To prevent groupthink, funders should use a process where they conduct an initial or full evaluation before seeing what other funders think about a proposal.
Can you elaborate on what a “new venture capitalist-style funder” is? I’m not sure what this refers to; I believe the EA early-stage funding space currently consists of a small number of entities like EA Grants and BERI Grants, plus a larger number of individual donors.
Thanks for sharing your thoughts Ruth! I agree, I was surprised both by the negative votes and also by the lack of comments, particularly since our original article announcing the EA Angel Group was received quite positively. I linked to the EA Forum Post introducing the EA Angel Group at the beginning of the article. I felt that if people had thoughts or concerns with the idea of the angel group they could comment or vote on the original angel group article, but the article had no new votes or comments.
Regarding the many different funding systems and separate application forms that currently exist across EA, I wholeheartedly agree with your perspective. Simplifying a bit, if we assume there are 3 EA funders and 15 EA projects and each funder’s application captures an equal fraction of all projects, each funder can only make funding decisions from their pool of 5 projects rather than the 15 projects that exist. Choosing the best projects to fund out of a smaller set of projects that are randomly selected out of a larger pool seems clearly suboptimal.
Our main objective is to fund the highest impact projects regardless of form. We have historically received several applications from EAs working on projects that are structured as for-profit entities.
I feel similarly! I like your idea of a prompt before downvoting new users. Perhaps in general there could be a message with no user action required that appears whenever a downvote is made to encourage people to downvote with an explanatory comment in the event the reason for downvoting isn’t obvious (i.e. it hasn’t already been expressed in a comment).