This is great! I think it could be worth emphasizing more that you’re essentially making a linkpost for the FHI / GovAI technical report Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development.
I tend to think that the network constraints are better addressed by solutions other than ad-hoc fixes (such as more proactive investigations of grantees), though I agree it’s a concern and it updates me a bit towards this not being a good idea.
I wasn’t suggesting deciding the opportunity cost case by case. Instead, grant evaluators could assume a fixed cost of e.g. $2k. In terms of estimating the benefit of making the grant, I think they already do that to some extent by providing numerical ratings to grants (as Oliver explains here). Also, being aware of the $10k rule already creates a small amount of work. Overall, the additional work seems negligible to me.
ETA: Setting a lower threshold would allow us to a) avoid turning down promising grants, and b) remove an incentive to ask for too much money. That seems pretty useful to me.
I actually think the $10k grant threshold doesn’t make a lot of sense even if we assume the details of this “opportunity cost” perspective are correct. Grants should fulfill the following criterion:
“Benefit of making the grant” ≥ “Financial cost of grant” + “CEA’s opportunity cost from distributing a grant”
If we assume that there are large impact differences between different opportunities, as EAs generally do, a $5k grant could easily have a benefit worth $50k to the EA community, and therefore easily be worth the $2k of opportunity cost to CEA. (A potential justification of the $10k threshold could argue in terms of some sort of “market efficiency” of grantmaking opportunities, but I think this would only justify a rigid threshold of ~$2k.)
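To make the criterion concrete, here’s a minimal sketch of it as a decision rule. The function name is hypothetical, and the $2k opportunity cost and the $5k/$50k figures are just the illustrative numbers from above, not actual CEA estimates:

```python
# Minimal sketch of the criterion above, using the illustrative
# numbers from the surrounding text (not actual CEA estimates).

def grant_is_worthwhile(benefit: float, grant_size: float,
                        opportunity_cost: float = 2_000) -> bool:
    """True if the benefit covers both the grant itself and the
    grantmaker's fixed opportunity cost of distributing it."""
    return benefit >= grant_size + opportunity_cost

# A $5k grant whose benefit to the EA community is worth $50k easily
# clears the $2k opportunity cost, even though it falls below the
# $10k threshold, which would reject it regardless of benefit:
print(grant_is_worthwhile(benefit=50_000, grant_size=5_000))  # True
```

On this framing, a rigid minimum grant size only makes sense if it tracks the fixed opportunity cost itself, which is the point about the ~$2k threshold above.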
IMO, a more desirable solution would be to have the EA Fund committees factor in the opportunity cost of making a grant on a case-by-case basis, rather than having a rigid “$10k” rule. Since EA Fund committees generally consist of smart people, I think they’d be able to understand and implement this well.
(moved this comment here)
I agree. The changes you’re making seem great! I also like the concise description.
(Will get back on some of the details via email, e.g., not sure 95% CIs are worth the effort.)
On #3, this goes in a similar direction.
Do you think Intentional Insights did a lot of damage? I’d say it was recognized by the community and handled pretty well while doing almost no damage.
As I also say in my above-linked talk, if we think that EA is constrained by vetting and by senior staff time, things like InIn have a very significant opportunity cost because they tend to take up a lot of time from senior EAs. To get a sense of this, just have a look at how long and thorough Jeff Kaufman’s post is, and how many people gave input/feedback—I’d guess that’s several weeks of work by senior staff that could otherwise go towards resolving important bottlenecks in EA. On top of that, I’d guess there was a lot of internal discussion in several EA orgs about how to handle this case. So I’d say this is a good example of how a single person can have a lot of negative impact that affects a lot of people.
I wasn’t asking for examples from EA, just the type of projects we’d expect from EAs.
The above-linked 80k article and EAG talk mention a lot of potential examples. I’m not sure what else you were hoping for? I also gave a concise (but not complete) overview in this Facebook comment.
Also relevant: Speeding up social science 10-fold, how to do research that’s actually useful, & why plenty of startups cause harm (and Spencer’s blog post)
Spencer and Rob think plenty of startups are actually harmful for society. Spencer explains how companies regularly cause harm by injuring their customers or third parties, or by drawing people and investment away from more useful projects.
So EAs should be more cautious about even ordinary startups than the typical venture capitalist is.
A few examples are mentioned in the resources linked above. The most well-known and commonly accepted one is Intentional Insights, but I think there are quite a few more.
I generally prefer not to make negative public statements about well-intentioned EA projects. I think this is probably why the examples might not be salient to everyone.
EA Forum posts are often hard to find through Google search
E.g., when searching on Google for “Long Term Future Fund,” this result only shows up at the bottom of the second page of search results, even after many user profile pages that seem much less relevant. Maybe this is something that can be fixed somehow?
I guess the solution would be to use the forum’s own search feature, but many people (including myself until just now) don’t routinely use that feature and prefer using Google search.
Another example: Searching on Google for “random funding max daniel effective altruism forum” doesn’t lead to this result; you have to search for “random funding max_daniel effective altruism forum” instead. I wasn’t aware that Google responds to the underscore character, and it seems strange that it’s so sensitive to it.
Maybe this is all Google’s fault, but I find that a bit hard to believe.
Suppose that, on some level of general competence, Alice is 95th percentile among EAs on the Forum and is working on her own EA project independently, while Bob is of 30th percentile competence and is working on his project while socially immersed in his many in-person EA contacts.
I agree that being immersed is important because risks are hard to anticipate for a single individual. However, I would argue that the scenario seems somewhat artificial: the general competence of someone not interacting with EAs is unlikely to be in the 95th percentile.
However, once these domain-specific pitfalls are pointed out to you, it’s not that cognitively taxing to grok them and adjust your thinking/actions accordingly.
I agree. However, this is not really about skill or intelligence: humans in general often don’t take critical feedback nearly as seriously as they should, and often fail to adjust their thinking/actions because of sunk costs, wanting to save face in their peer group, grandiosity, etc. This also applies to EAs (maybe somewhat less so, but not vastly).
From looking at the published list of EA Hotel residents, I tentatively think some people’s work might come with high downside risk, while others have high upside potential and seem worth supporting. I’m not sure how this balances out. Discussing individual projects in public seems difficult, which is maybe part of the reason why people find the arguments against funding the EA Hotel unconvincing. All else equal, I’d probably prefer something like Aaron Gertler’s approach of “looking at the Hotel’s guest list, picking the best-sounding project, and offering money directly to the person behind it.” I have also shared some thoughts with EA Hotel staff on how to design the admission process.
(If one accepted the premise that downside risk is prevalent and significant, one could argue that any donation to the EA Hotel that doesn’t set incentives to reduce downside risk might counterfactually replace a donation that does. I’m not sure this argument works, but it could be worth thinking about.)
(All my personal opinion, not speaking for anyone here.)
Edited to add: In many ways, the EA Hotel acts like a de facto EA grantmaker, so the concerns outlined in my comment here apply:
When long-termist grant applications don’t get funded, the reason usually isn’t lack of funding, but one of the following:
- The grantmaker was unable to vet the project (due to time constraints or lack of domain expertise) or at least thought it was a better fit for a different grantmaker.
- The grantmaker thought the project came with a high risk of accidental harm.
High-quality grant applications tend to get funded quickly and are thereby eliminated from the pool of proposals available to the EA community, while applicants with higher-risk proposals tend to apply/pitch to lots of funders. This means that on average, proposals submitted to funders will be skewed towards high-downside-risk projects, and funders could themselves easily do harm if they end up supporting many of them.
Relevant: When should EAs allocate funding randomly? An inconclusive literature review.
Thanks for the thorough response! I think I agree with what you said, and I think the process you mentioned seems adequate to address the risks (if implemented well).
My perception is that application sharing could help address vetting constraints because it allows other funders (who may have more expertise in a particular area) to help with vetting. I think other funders probably don’t have rolling applications because of the increased effort this entails, so in that sense rolling applications can also help resolve vetting constraints.
This post seems very insightful to me, and it seems to have worked out very well in terms of upvotes (which presumably also increases your chances of getting funding)? I’d be interested to learn who wrote this, but of course no need to say if you prefer not to. :)
- The grantmaker was unable to vet the project (due to time constraints or lack of domain expertise) or at least thought it was a better fit for a different grantmaker.
- The grantmaker thought the project came with a high risk of accidental harm.
This post contains great ideas for resolving the former point, but doesn’t demonstrate high awareness of the latter concern. Awareness of these risks seems important to me, especially for funders: High-quality grant applications tend to get funded quickly and are thereby eliminated from the pool of proposals available to the EA community, while applicants with higher-risk proposals tend to apply/pitch to lots of funders. This means that on average, proposals submitted to funders will be skewed towards high-downside-risk projects, and funders could themselves easily do harm if they end up supporting many of them. I’d be interested in your thoughts on that.
I really like that you’re providing feedback to applicants! In general, I wish the EA community were more proactive about providing critical feedback.
I would ask Open Phil whether they’d be okay with you sharing it with the organization you’re applying to (ideally only once you’re past the first stage, and only if the other organization has expressed interest).
Good point, agree it depends on the type of work.
Two points that speak against this view a bit:
It seems easier to increase the efficiency of your work than the quality. All else equal, I’m tentatively more interested in people who can do very high-quality work inefficiently than in people who do mediocre work quickly, because I expect that the former are more likely to eventually do high-quality, highly efficient work.
Some people tend to get very nervous with timed tests and mess up badly; it seems good to give them the opportunity to prove themselves in a less stressful environment.
My current view is to ask for both timed and untimed tests, and to make the untimed tests very simple/short (such that you could complete them in 20 minutes if you had to, and there’s very little benefit to spending >2h on them).
I think you’re correct, but my impression is that most EA orgs will happily agree to this.
One quick recommendation for what applicants can do that might be useful for employers (speaking from an employer’s perspective): Proactively share previous work trials. Whenever applicants did this, it provided me with additional valuable information that helped me decide whether someone should advance to the next stage.