On what grounds do you expect EAs to have better personal ability?
Something I’ve been idly concerned about in the past is the possibility that EAs might be systematically more ambitious than equally competent non-EAs, so that at a given level of ambition, EAs would be systematically less competent. I don’t have much evidence of this being borne out in practice, but it’s one not-so-implausible way that EA charity founders might be worse than average at the skills needed to found charities.
I thought your “dedicated and focused” section would be arguing that EAs would work harder, but it actually seems to carry on from your “value alignment” section. I’d suggest these should be combined into one section to avoid confusion.
A couple of comments:
Could you state what your role and involvement is with the various charities you mention, and what those charities do, to provide some context? E.g. you mention helping Fortify Health, but I’m not very familiar with what they do or how you helped them.
Reading this, a worry I had is that newly founded charities would often be competing for the same pot of money from EA orgs and/or individual EAs. Do you think this is likely to be a problem? The success of this strategy seems to rely on Open Phil doing a lot of the funding. If new EA charities instead raise money from ineffective charities (possible), or raise money from people who would not otherwise have donated (not that likely), then this isn’t a problem.
1) I hope to publish a post soon going specifically into the help I gave Fortify Health and what help I can give future charities, but I can clarify briefly here. Charity Science Health: I was on the research team that picked the intervention, co-founded the organization, and worked full time in a co-ED position for the first 2 years of its existence; effectively, I was involved as much as one could be in a charity. Fortify Health: I was on the research team that picked the intervention, connected the co-founders when one reached out to me, gave them a seed grant for their first 6 months, and helped them in a consulting role ~5 hours a week over those 6 months; effectively, I was like a highly involved board member.
2) I think this is a huge concern. I generally think EA charities should be aiming to be the highest-impact charity in a given field. E.g. a lot of the value of CSH comes from the small chance we can be higher impact than AMF. If CSH, for example, fell between the effectiveness of GD and AMF, CSH would pretty aggressively seek funding outside of the EA community (including GW/OPP). This is partly because “the last dollar spent” in poverty is likely pretty high impact (see this post on talent gaps for more details: http://effective-altruism.com/ea/1ok/ea_doesnt_have_a_talent_gap_different_causes_have/). In something like AR, given the funding situation, I think the more important consideration would be whether a new charity has a good chance of beating the bottom 25% of charities funded by OPP/ACE.
Connections in the field seem to be quite an important foundational issue, but while they may be a weak area generally, I think they can also be an area where insufficient time is spent considering the importance of plurality. If a certain group of people were asked to be the experts in the field, it could become fairly self-recommending from there on, particularly if it were resourced and various benefits flowed from it. I tend to view this as a bit of an issue within EAA, particularly at both ACE and the Open Philanthropy Project, where approaches tend not to be given equal consideration; instead, some (particularly those aligned with direct utilitarianism) are valued more highly than others.
I think this can then lead to other issues in terms of internal evaluation: in-group bias wouldn’t be challenged because external evaluation has been devalued, creating a bit of a problematic loop.