Anonymous feedback form: https://www.admonymous.co/kuhanj
kuhanj’s Quick takes
I’ve been feeling increasingly anxious and disappointed over the last couple of years about EA organizations and individuals (myself very much included) allocating resources and doing prioritization very suboptimally.
Reasons why I think this happens:
Not recognizing how little clarity we have about how to allocate resources (time, money, attention, etc.) most impactfully.
Path dependence + insufficient intervention prioritization research (in terms of quality, quantity, and frequency). I thought this post brought up good points about the relative lack of cross-cause prioritization research + dissemination in the EA community despite its importance.
Being insufficiently open-minded about which areas and interventions might warrant resources/attention, and showing unwarranted deference to EA canon, 80K, Open Phil, EA/rationality thought leaders, etc.
Poorly applying heuristics as a substitute for prioritization work. See this comment and discussion about the neglectedness heuristic causing us to miss out on impact. FWIW I believe this very strongly and think the community has missed out on a ton of impact because of this specific mistake, but I’m unable to write about the specifics in detail publicly. Feel free to reach out if you’d like to discuss this in private.
Aversion to feeling uncertainty and confusion (likely exacerbated by stress about AGI timelines).
Attachment to feeling certainty, comfort and assurance about the ethical and epistemic justification of our past actions, thinking, and deference.
Being slow to re-orient to important, quickly evolving technological and geopolitical developments, and being unresponsive to certain kinds of evidence (e.g. within the AI world, not taking important political developments into account; outside the AI world, not taking AGI into account).
Strong incentives that don’t track impact (e.g. strong social incentives to have certain beliefs, work at certain orgs, focus on certain topics), and weak incentives to figure out and act on what is actually most impactful. We don’t hear future beings (or, for the most part, current beings) letting us know that we could be helping them much more effectively by taking one action over another. We do feel judgment from our friends/in-group very saliently.
Lacking the self-confidence/agency/courage/hero-licensing/interest/time/etc. to figure things out ourselves and to share what we believe (and what we’re confused about) with others, especially when it diverges from the aforementioned common sources of deference.
This is a shame given how brilliant, dedicated, and experienced members of the community are, and how much insight people have to offer, both within the community and to the broader world.
I’m collaborating on a research project exploring how to most effectively address concentration-of-power risks (which I think the community has been neglecting) in order to improve the long-term future and mitigate x-risk, taking into account the implications of AGI and potentially short timelines, as well as the current political landscape (mostly focused on the US, and to a lesser extent China). We’re planning to collate, ideate, and prioritize among concrete interventions to work on and donate to, and to compare their effectiveness against other longtermist/x-risk mitigation interventions. I’d be excited to collaborate with others interested in getting more clarity on how best to spend time, money, and other resources on longtermist grounds. Reach out (e.g. by EA Forum DM) if you’re interested. :)
I would also love to see more individuals and orgs conduct, fund, and share cross-cause prioritization analyses (especially in areas under-explored by the community), with discretion about when to share publicly vs. privately.
Community building in effective altruism (panel discussion)
Seems worth trying! I’d be interested in reading a write-up if you decide to run it.
A few quick thoughts:
Many arguments about the election’s tractability don’t hinge on the impact of donations.
Donating is not the only way to contribute to the election. Here is a public page showing the results of a meta-analysis on the effectiveness of different uses of time to increase turnout (though the number used to estimate the cost-effectiveness of fundraising is not sourced here). The analysis itself is restricted, but people can apply to request access.
Polling and historical data suggest there is a good chance this election will be decided by thousands to hundreds of thousands of swing-state votes. That means any intervention that can swing thousands of votes (or maybe even hundreds) has a meaningful chance of swinging the election; see the rough back-of-envelope sketch at the end of this list. I discussed some potential interventions in the post.
There is recent evidence suggesting that certain events can quickly cause a large portion of voters to change their minds about whom to vote for. Nate Silver wrote “The impact of Comey’s letter is comparatively easy to quantify, by contrast. At a maximum, it might have shifted the race by 3 or 4 percentage points toward Donald Trump, swinging Michigan, Pennsylvania, Wisconsin and Florida to him, perhaps along with North Carolina and Arizona. At a minimum, its impact might have been only a percentage point or so. Still, because Clinton lost Michigan, Pennsylvania and Wisconsin by less than 1 point, the letter was probably enough to change the outcome of the Electoral College.” (See the full article for more details/analysis.) That said, others disagree about the effect of the Comey letter and the media reporting around it.
Other heuristics mentioned in the post (like the reasons for campaign-related work being undesirable).
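To make the rough quantitative logic above concrete, here is a minimal back-of-envelope sketch in Python. The margin spread and the number of votes swung are purely illustrative assumptions chosen for the example, not figures from the post or from the analyses mentioned above:

```python
import math

# Minimal back-of-envelope sketch with purely illustrative numbers (assumptions,
# not figures from the post): model the decisive swing-state margin as roughly
# normally distributed and centered near a tie, for simplicity.
MARGIN_SD = 50_000   # assumed standard deviation of the decisive margin, in votes
VOTES_SWUNG = 2_000  # assumed number of votes an intervention shifts toward one side

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    """Probability density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Shifting VOTES_SWUNG votes flips the outcome roughly when the margin would
# otherwise land within that many votes of a tie (on the losing side), so the
# chance of being decisive is about VOTES_SWUNG times the density near zero.
p_decisive = VOTES_SWUNG * normal_pdf(0, 0, MARGIN_SD)
print(f"~{p_decisive:.1%} chance of flipping the outcome under these assumptions")
```

Even with these made-up numbers, the point is just that when the expected margin is in the tens of thousands of votes, an intervention that moves thousands of votes has a non-negligible chance of being decisive.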
The above numbers are based on RCTs and information shared with me by multiple organizations. I’m sorry I’m unable to share more details publicly; I’m respecting the preferences of these organizations.
These can’t be shared publicly, but I’ll DM you.
Focusing your impact on short vs long TAI timelines
To add a bit of context in terms of on-the-ground community building: I’ve been working on EA and AI safety community building at MIT and Harvard for most of the last two years (including now), though I’ve been more focused on AI safety field-building. I’ve also helped out with advising for university EA groups and with workshops/retreats for uni group organizers (both EA and AI safety), and I organized residencies at a few universities to support beginning-of-year EA outreach in 2021 and 2022, along with other miscellaneous EA CB projects (e.g. working with the CEA events team last year).
I do agree though that my experience is pretty different from that of regional/city/national group organizers.
Good catch—added that to the eligibility section for the AAAS Rapid Response Cohort in AI blurb. Thanks!
Want to work on US emerging tech policy? Consider the Horizon Fellowship, TechCongress, and AAAS AI Fellowship
I would guess the ratio is pretty skewed in the safety direction (since uni AIS CB is generally not counterfactually getting people interested in AI when they previously weren’t; if anything, EA might have more of that effect), so maybe something in the 1:10 to 1:50 range (with a 1:20ish point estimate for the ratio of median capabilities research contribution to median safety research contribution from AIS CB)?
I don’t really trust my numbers though. This ratio is also more favorable now than I would have estimated a few months/years ago, when contribution to AGI hype from AIS CB would have seemed much more counterfactual (but also AIS CB seems less counterfactual now that AI x-risk is getting a lot of mainstream coverage).
AI Safety Field Building vs. EA CB
Upcoming speaker series on emerging tech, national security & US policy careers
Announcing the Cambridge Boston Alignment Initiative [Hiring!]
Tips + Resources for Getting Long-Term Value from Retreats/Conferences (and in general)
I think donations in the next 2-3 days would be very useful (probably even more useful than door-knocking and phone-banking if one had to pick) for TV ads, but after that the benefits diminish somewhat steeply over the remaining days.
I really appreciated this post, and think there is a ton of room for more impact with more frequent and rigorous cross-cause prioritization work. Your post prompted me to finally write up a related quick take I’ve been meaning to share for a while (which I’ll reproduce below), so thank you!
***
I’ve been feeling increasingly anxious and disappointed over the last couple of years about EA organizations and individuals (myself very much included) allocating resources and doing prioritization very suboptimally.
Reasons why I think this happens:
Not recognizing how little clarity we have about how to allocate resources (time, money, attention, etc.) most impactfully.
Path dependence + insufficient intervention prioritization research (in terms of quality, quantity, and frequency). I thought this post brought up good points about the relative lack of cross-cause prioritization research + dissemination in the EA community despite its importance.
Being insufficiently open-minded about which areas and interventions might warrant resources/attention, and showing unwarranted deference to EA canon, 80K, Open Phil, EA/rationality thought leaders, etc.
Poorly applying heuristics as a substitute for prioritization work. See this comment and discussion about the neglectedness heuristic causing us to miss out on impact. FWIW I believe this very strongly and think the community has missed out on a ton of impact because of this specific mistake, but I’m unable to write about the specifics in detail publicly. Feel free to reach out if you’d like to discuss this in private.
Aversion to feeling uncertainty and confusion (likely exacerbated by stress about AGI timelines).
Attachment to feeling certainty, comfort and assurance about the ethical and epistemic justification of our past actions, thinking, and deference.
Being slow to re-orient to important, quickly evolving technological and geopolitical developments, and being unresponsive to certain kinds of evidence (e.g. within the AI world, not taking important political developments into account; outside the AI world, not taking AGI into account).
Lacking the self-confidence/agency/courage/hero-licensing/interest/time/etc. to figure things out ourselves and to share what we believe (and what we’re confused about) with others, especially when it diverges from the aforementioned common sources of deference.
This is a shame given how brilliant, dedicated, and experienced members of the community are, and how much insight people have to offer, both within the community and to the broader world.
I’m collaborating on a research project exploring how to most effectively address concentration-of-power risks (which I think the community has been neglecting) in order to improve the long-term future and mitigate x-risk, taking into account the implications of AGI and potentially short timelines, as well as the current political landscape (mostly focused on the US, and to a lesser extent China). We’re planning to collate, ideate, and prioritize among concrete interventions to work on and donate to, and to compare their effectiveness against other longtermist/x-risk mitigation interventions. I’d be excited to collaborate with others interested in getting more clarity on how best to spend time, money, and other resources on longtermist grounds. Reach out (e.g. by EA Forum DM) if you’re interested. :)
I would also love to see more individuals and orgs conduct, fund, and share cross-cause prioritization analyses (especially in areas under-explored by the community), with discretion about when to share publicly vs. privately.