Suppose that, on some level of general competence, Alice is 95th percentile among EAs on the Forum and is working on her own EA project independently, while Bob is of 30th percentile competence and is working on his project while socially immersed in his many in-person EA contacts.
I agree that being immersed is important, because risks are hard for a single individual to anticipate. I would argue that the scenario seems somewhat artificial, as someone not interacting with EAs is unlikely to be at the 95th percentile of general competence.
However, once these domain-specific pitfalls are pointed out to you, it’s not that cognitively taxing to grok them and adjust your thinking/actions accordingly.
I agree. However, this is not really about skill or intelligence: Humans in general often don’t take critical feedback nearly as seriously as they should, and often don’t adjust their thinking/actions due to sunk costs, wanting to save face in their peer group, grandiose personality, etc. This also applies to EAs (maybe somewhat but not vastly less so).
From looking at the published list of EA Hotel residents, I tentatively think some people’s work might come with high downside risk, while others have high upside potential and seem worth supporting. I’m not sure how this balances out. Discussing individual projects in public seems difficult, which is maybe part of the reason why people find the arguments against funding the EA Hotel unconvincing. All else equal, I’d probably prefer something like Aaron Gertler’s approach of “looking at the Hotel’s guest list, picking the best-sounding project, and offering money directly to the person behind it.” I have also shared some thoughts for how to design the admission process with EA Hotel staff.
(If one accepted the premise that downside risk is prevalent and significant, one could argue that any donation to the EA Hotel that doesn’t set incentives to reduce downside risk might counterfactually replace a donation that does. I’m not sure this argument works, but it could be worth thinking about.)
(All my personal opinion, not speaking for anyone here.)
Edited to add: In many ways, the EA Hotel acts like a de facto EA grantmaker, so the concerns outlined in my comment here apply:
When long-termist grant applications don’t get funded, the reason usually isn’t lack of funding, but one of the following:
--The grantmaker was unable to vet the project (due to time constraints or lack of domain expertise) or at least thought it was a better fit for a different grantmaker.
--The grantmaker thought the project came with a high risk of accidental harm.
(…)
High-quality grant applications tend to get funded quickly and are thereby eliminated from the pool of proposals available to the EA community, while applicants with higher-risk proposals tend to apply/pitch to lots of funders. This means that on average, proposals submitted to funders will be skewed towards high-downside-risk projects, and funders could themselves easily do harm if they end up supporting many of them.
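To illustrate this selection effect with some purely made-up numbers (my own toy illustration, not part of the quoted comment): if low-risk proposals leave the pool quickly because they get funded, while high-risk proposals keep circulating and re-applying, the pool a funder sees at any given moment can be dominated by high-risk projects even when they are a minority of new applications.

```python
# Toy illustration (all numbers are assumptions, not data): suppose 70% of new
# proposals are low-risk and get funded with probability 0.5 per round, while
# 30% are high-risk and get funded with probability 0.1 per round. A proposal
# stays in the pool for ~1/p rounds on average, so the pool at any time is
# weighted by arrival share x expected time in the pool.

arrival_share = {"low_risk": 0.7, "high_risk": 0.3}
p_funded_per_round = {"low_risk": 0.5, "high_risk": 0.1}

# Expected rounds spent circulating before being funded (geometric distribution).
time_in_pool = {k: 1 / p for k, p in p_funded_per_round.items()}

weights = {k: arrival_share[k] * time_in_pool[k] for k in arrival_share}
total = sum(weights.values())
for k, w in weights.items():
    print(f"{k}: {w / total:.0%} of the pool at any given time")
# -> low_risk ~32%, high_risk ~68%: the standing pool is skewed towards
#    high-risk projects even though only 30% of new proposals are high-risk.
```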
Humans in general often don’t take critical feedback nearly as seriously as they should, and often don’t adjust their thinking/actions due to sunk costs, wanting to save face in their peer group, grandiose personality, etc. This also applies to EAs (maybe somewhat but not vastly less so).
Agree that this can be tough (from experience). I would add that it can be emotionally draining, especially if the feedback is uncharitable or based on misunderstanding or factual error. It can be further complicated if, after reflection, one still doesn’t fully agree with the feedback and there is a genuine philosophical disagreement. (NB I’m happy to have received feedback from Jonas Vollmer and think it has made, and will continue to make, the EA Hotel project stronger; “uncharitable or based on misunderstanding or factual error” does not apply to his feedback.)
Aaron Gertler’s approach of “looking at the Hotel’s guest list, picking the best-sounding project, and offering money directly to the person behind it.”
This does require the hotel to exist though (or something like it). See my comment here.
I have also shared some thoughts for how to design the admission process with EA Hotel staff.
Based on some initial ideas from Jonas, we are working on a rating system for applicants and ongoing hosted projects. Tentatively it might be something like a logarithmic scale of EV from -5 to +5, with +1 = giving the money to GiveDirectly*. Trustees/Manager would rate in one anonymous pool, Advisors in another, with Bayesian priors stated in words and 95% confidence intervals given. There would then be another round of scoring after seeing others’ input and discussion (with special care taken to discuss cases where ratings of -1 or below are given), after which the final scores are aggregated. Guests would be accepted if they clear a bar of +1 (to increase with diminishing capacity); guests falling below the bar would have 3 months to pivot/improve.
*Would be interested in comparing with any numerical schemes other EA grantmakers are using.
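To make the mechanics concrete, here is a minimal sketch of how the scoring and acceptance bar might work. The integer -5 to +5 log-EV scale, the +1 bar, the two anonymous pools, and the flag for ratings of -1 or below come from the description above; the aggregation rule (a simple equal-weight mean of the two pool means) and all names in the code are my own illustrative assumptions, not the finalized process.

```python
# Hypothetical sketch of the proposed rating aggregation. Assumptions:
# - each rater gives an integer score on the log-EV scale from -5 to +5
#   (+1 = as good as giving the money to GiveDirectly),
# - Trustees/Manager and Advisors score in two separate anonymous pools,
# - the final score is the mean of the two pool means (one possible rule;
#   the actual aggregation method has not been decided).

from statistics import mean

ACCEPTANCE_BAR = 1  # +1 bar, to increase with diminishing capacity


def aggregate(trustee_scores: list[int], advisor_scores: list[int]) -> float:
    """Combine the two anonymous pools into a single score, weighting them equally."""
    return mean([mean(trustee_scores), mean(advisor_scores)])


def decide(trustee_scores, advisor_scores, bar=ACCEPTANCE_BAR):
    """Return the tentative decision, flagging low ratings for explicit discussion."""
    if any(s <= -1 for s in trustee_scores + advisor_scores):
        print("At least one rating <= -1: flag for discussion before final scoring.")
    score = aggregate(trustee_scores, advisor_scores)
    return "accept" if score >= bar else "3-month pivot/improve period"


# Example: two trustees and three advisors rate an applicant.
print(decide([2, 1], [3, 0, 1]))  # -> "accept" (aggregate score ~1.4, above the +1 bar)
```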
I agree. The changes you’re making seem great! I also like the concise description.
(Will get back to you on some of the details via email; e.g., I’m not sure the 95% CIs are worth the effort.)
(Strong upvoted.)