I’d add that I think there’s something to be said in favor of a needs-based model in the early stages of a startup. For as long as you’re heavily funding-constrained, it allows you to hire a greater number of people at a given cost. (This is essentially first-degree price discrimination; maximizing producer’s surplus (≈ altruistic utility) can IMO be a good idea under some circumstances.) One could argue that even then, promising EA startups should (and will) be paid better, but I’m not sure this always works out in practice.
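To make the intuition concrete, here’s a toy sketch with made-up numbers (nothing here is based on real salary data): under needs-based pay, a fixed budget covers more hires than under a flat market-rate salary, which is the first-degree price discrimination point.

```python
# Toy illustration with hypothetical numbers: a needs-based model pays each
# hire only their "reservation salary", so a fixed budget stretches further
# than under a uniform market-rate salary.

budget = 200_000                                   # annual salary budget (USD, made up)
flat_salary = 60_000                               # uniform market-ish salary (made up)
needs = [30_000, 35_000, 40_000, 45_000, 50_000]   # hypothetical needs-based salaries

hires_flat = budget // flat_salary                 # 3 hires at the flat rate

hires_needs, spent = 0, 0
for salary in sorted(needs):                       # fill the cheapest needs first
    if spent + salary <= budget:
        spent += salary
        hires_needs += 1                           # 5 hires within the same budget

print(hires_flat, hires_needs)                     # 3 5
```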
Other than that, I agree.
One thing I like about offsetting is that it creates a more cooperative and inclusive EA community. For instance, animal advocates might be put off less by meat-eating EAs if they learn that those EAs offset their consumption, and poverty reducers might be less concerned about long-termists making policy recommendations that (perhaps as a side effect) slow down AI progress (and thereby the escape from global poverty) if the long-termists also support some poverty interventions (especially when doing so is particularly cheap for them). In general, there seem to be significant gains from cooperation, and given repeated interaction, it’s fairly easy to actually move towards such outcomes, including by starting to cooperate unilaterally.
Of course, this is best achieved not through offsetting, but by thinking about who we will want to cooperate with and trying to help their values as cost-effectively as possible.
This piece does a good job of making this point: https://www.givingwhatwecan.org/post/2015/06/cost-fighting-malaria-malnutrition-neglected-tropical-diseases-and-hivaids-and/
It has! It was successful, both in terms of participant satisfaction and our own assessment of research progress/ideas.
Haha, interesting, thanks! :)
What you’re saying matches my personal experience.
I was wondering whether there might be a discrepancy among those who stop attending – any thoughts on that?
ETA: This seems more plausible to me than a general tendency because women and men in EA have already self-selected based on being interested in these ideas. Though maybe something similar could be said of people who attended once but stopped.
Thanks a lot for this excellent piece, really appreciate it.
I’ve been wondering for some time whether differences in interests (if they exist) might contribute to the gender imbalance in EA, and if so, what lessons we might draw from that. Maybe focus groups (especially with women and men who stopped attending events) could contribute to answering this question.
For poverty-oriented interventions, have you considered less measurable, more hits-based, more growth-focused ideas? I’m thinking of opportunities that might have a chance of replicating something like China’s escape from extreme poverty in other countries.
A few ideas for where you might start if you tried to look into this more:
(Please let me know if you found this interesting / helpful – I might write a brief EA forum post about this at some point.)
There already is a basic vetting process; I’d mostly welcome fairly gradual improvements to lower downside risk. (I think my initial comment sounded more like the bar should be fairly high, similar to that of, e.g., the LTFF. This is not what I intended to say; I think it should still be considerably lower.)
I think even just explicitly saying something like “we welcome criticism of high-status people or institutions” would go a long way toward shaping both people’s perception of the vetting process and the vetters’ approach.
That said, your arguments did update me in the direction “small changes to the vetting process seem better than large changes.”
Interesting! I agree with the points you make, but I was hoping that good vetting wouldn’t suffer from these problems.
CEA’s semi-internal media advice contains some valuable lessons. I was going to post a write-up on the EA Forum at some point, but given that media attention has since been de-emphasized as an EA priority, I decided against pursuing it (I also have some old “EA media strategy” presentation slides, but unfortunately they’re in German). If lots of people thought this would be valuable, or if we learned that EA-Hotel-type issues occur on a regular basis, I’d consider it, though. (I also think much of my experience is only relevant to global poverty and animal welfare, not to AI or other cause areas.)
A point I’d personally want to add to Habryka’s list: I’m currently unsure whether there is sufficiently good vetting of guests. Since the EA Hotel provides valuable services (almost) for free, it kind of acts as a de facto grantmaker, and runs the risk of funding people who are accidentally doing harm. There are reasons to think that harmful projects will be overrepresented in the application pool (Habryka also made some similar points). As I understand it, the EA Hotel is currently improving their vetting, which I think will be a step in the right direction, and could potentially resolve this issue.
My impression of how the EA Hotel crew dealt with media attention was something like “better than many did in the early stages (including myself in the early stages of EAF), but (due to lack of experience or training) considerably worse than most EA orgs would do these days.” There are many counterintuitive lessons to be learnt, many of which I still don’t fully understand myself.
However, since the initial media interest has abated, I think this isn’t really relevant for current grants anyway.
This is great! I think it could be worth emphasizing more that you’re essentially making a linkpost for the FHI / GovAI technical report Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development.
I tend to think that the network constraints are better addressed by solutions other than ad-hoc fixes (such as more proactive investigations of grantees), though I agree it’s a concern and it updates me a bit towards this not being a good idea.
I wasn’t suggesting deciding the opportunity cost case by case. Instead, grant evaluators could assume a fixed cost of e.g. $2k. In terms of estimating the benefit of making the grant, I think they do that already to some extent by providing numerical ratings to grants (as Oliver explains here). Also, being aware of the $10k rule already creates a small amount of work. Overall, I think the additional amount of work seems negligibly small.
ETA: Setting a lower threshold would allow us to a) avoid turning down promising grants, and b) remove an incentive to ask for too much money. That seems pretty useful to me.
I actually think the $10k grant threshold doesn’t make a lot of sense even if we assume the details of this “opportunity cost” perspective are correct. Grants should fulfill the following criterion:
“Benefit of making the grant” ≥ “Financial cost of grant” + “CEA’s opportunity cost from distributing a grant”
If we assume that there are large impact differences between different opportunities, as EAs generally do, a $5k grant could easily have a benefit worth $50k to the EA community, and therefore easily be worth the $2k of opportunity cost to CEA. (A potential justification of the $10k threshold could argue in terms of some sort of “market efficiency” of grantmaking opportunities, but I think this would only justify a rigid threshold of ~$2k.)
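As a minimal sketch of that criterion (all numbers hypothetical, including the $2k overhead figure used above):

```python
# Minimal sketch of the criterion: fund a grant iff its estimated benefit
# covers both the money granted and the grantmaker's fixed per-grant overhead.

FIXED_OVERHEAD = 2_000  # assumed per-grant opportunity cost to CEA, in USD

def worth_funding(estimated_benefit: float, grant_amount: float,
                  overhead: float = FIXED_OVERHEAD) -> bool:
    """True iff benefit >= financial cost of the grant + overhead."""
    return estimated_benefit >= grant_amount + overhead

# The example above: a $5k grant with a benefit worth $50k clears the bar easily.
print(worth_funding(estimated_benefit=50_000, grant_amount=5_000))  # True
```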
IMO, a more desirable solution would be to have the EA Fund committees factor in the opportunity cost of making a grant on a case-by-case basis, rather than having a rigid “$10k” rule. Since EA Fund committees generally consist of smart people, I think they’d be able to understand and implement this well.
(moved this comment here)
I agree. The changes you’re making seem great! I also like the concise description.
(Will get back on some of the details via email, e.g., not sure 95% CIs are worth the effort.)
On #3, this goes in a similar direction.