Despite being the one who wrote the original post, I did think while writing it that figuring out whether one cause is underfunded compared to another is a really difficult question to answer. Part of my motivation in writing this was to see if anyone had insights as to whether my claims were right or not.
I agree that EA funds shouldn’t be distributed democratically, and that “EA leaders” or survey participants aren’t necessarily the right allocators. Do you think that the current resource allocation is being made by experts with “judgment, track record, and depth of thinking about cause prioritization”?
If I had to guess, I would say it is a combination of this and other factors, such as EA UHNW donor preferences, a cause’s ability to attract funding from other sources, etc.
Ideally we would survey some of the best grantmaking experts on cause prio, but I still found the EA survey and MCF survey to be a useful proxy, albeit flawed.
Ohh I like this. I think this articulates the phenomenon well. Thanks.
I agree re the career problem. I wonder how much additional money would fix the problem vs other issues like the cultures of the two movements/ecosystems, status of working in the spaces, etc.
Glad to hear. Welcome to the community Karen!
One take is that what is happening is that the movement cares more about animal welfare as a cause area over time, but that the care and concern for AI safety/x-risk reduction has increased even more, and so people are shifting their limited time and resources towards those cause areas. This leads to the dynamic of the movement wanting animal advocacy efforts to win, but not being the ones to dedicate their donations or career to the effort.
Thanks for sharing your thoughts Tyler. I tend to think that 2 & 3 account for the funding discrepancies.
I do think at the same time there might be a discrepancy between the ideal and actual allocation of talent, with so many EAs focused on working in AI safety/x-risk reduction. To be clear, I think these areas are incredibly important, but maybe a few EAs who are on the fence should work in animal advocacy instead.
I definitely think this should happen too, but reducing uncertainty about cause prio beyond what has already been done to date is a much much bigger and harder ask than ‘share your best guess of how you would allocate a billion dollars’.
I think one of the challenges here is that the people who are respected or hold a leadership-type role on cause prioritisation seem reluctant to weigh in, perhaps to the detriment of Anthropic folks trying to make a decision one way or another.
Even more speculative: Maybe part of what’s going on here is that producing the charity comparison numbers GiveWell does, or comparing charities within a cause area in general, is one level of crazy and difficult. But the moment you get to cross-cause comparisons, these numbers become several orders of magnitude more crazy and uncertain. And maybe there’s a reluctance to use the same methodology for something so much more uncertain, because it’s a less useful tool/there’s a risk it is perceived as something more solid than it is.
Overall I think more people who have insights on cause prio should be saying: if I had a billion dollars, here’s how I’d spend it, and why.
Oh, this is nice to read as I agree that we might be able to get some reasonable enough answers about Shrimp Welfare Project vs AMF (e.g. RP’s moral weights project).
Some rough thoughts: It’s when we get to comparing Shrimp Welfare Project to AI safety PACs in the US that I think the task goes from crazy hard but worth it to maybe too gargantuan a task (although some have tried). I also think here the uncertainty is so large that it’s harder to defer to experts in the way that one can defer to GiveWell if they care about helping the world’s poorest people alive today.
But I do agree that people need a way to decide, and Anthropic staff are incredibly time-poor and some of these interventions are very time-sensitive if you have short timelines, so that raises the question: if I’m recommending worldview diversification, which cause areas get attention and how do we split among them?
I am legitimately very interested in thoughtful quantitative ways of going about this (my job involves a non-zero amount of advising Anthropic folks). Right now, it seems like Rethink Priorities is the only group doing this in public (e.g. here). To be honest, I find their work has gone over my head, and while I don’t want to speak for them, my understanding is they might be doing more in this space soon.
I think the moment you try and compare charities across causes, especially for the ones that have harder-to-evaluate assumptions like global catastrophic risk and animal welfare, it very quickly becomes clear how impossibly crazy any solid numbers are, and how much they rest on uncertain philosophical assumptions, and how wide the error margins are. I think at that point you’re either left with worldview diversification or some incredibly complex, as-yet-not-very-well-settled, cause prioritisation.
My understanding is that all of the EA high net worth donor advisors like Longview, GiveWell, Coefficient Giving, (the org I work at) Senterra Funders, and many others are able to pitch their various offers to folks at Anthropic. What has been missing is some recommended cause prio split and/or resources, but some orgs are starting to work on this now.
I think that any way to systematise this, where you complete a quiz and it gives you an answer, is too superficial to be useful. High net worth funders need to decide for themselves whether or not they trust specific grant makers beyond whether or not those grant makers are aligned with their values on paper.
It’s great to hear that being on the front foot and reaching out to people with specific offers has worked for you.
I actually want to push back on your advice for many readers here. I think for many people who aren’t getting jobs, the reason is not that the jobs are too competitive, but that they’re not meeting the bar for the role. This seems more common for EAs with little professional experience, as many employers want applicants who have already been trained. In AI safety, it also seems that for some parts of the problem, an exceptional level of talent or skill is needed to meaningfully contribute.
In addition to applying for more jobs or reaching out to people directly, I’d also recommend:
Broadening your search to a wider array of roles.
Applying to impactful work that is not on the 80k job board; many of the most impactful jobs are at orgs where most people are not EA.
Getting a few years of training under your belt and coming back to these jobs with, I think, a much higher chance of success. (See my post here.)
I realise short timelines makes this all much harder, but I do think many people early in their career do their best work in the environment of an organisation, team, manager, etc.
As someone who recently participated in a name change, I can assure you the pros and cons of this name versus other contenders were probably discussed ad nauseam by the team involved, and they decided on this name despite the nerdy and clunky vibe.
Approx how much absorbency/room for more funding is there in each cause area? How many good additional opportunities are there over what is currently being funded? How steep are the diminishing returns for an additional $10m, $50m, $100m, $500m?
Thanks for writing this. As someone who feels more at home in EA spaces, I do sometimes feel like EAs are pretty critical of rationalist sub-culture (often reasonably) but take for granted the valuable things rationalism has contributed to EA ideas and norms.
Hi David, if I’ve understood you correctly, I agree that a reason to return home is for other priorities that have nothing to do with impact. I personally did not return home for the extra happiness or motivation required to stay productive, but because I valued these other things intrinsically, which Julia articulates better here: https://forum.effectivealtruism.org/posts/zu28unKfTHoxRWpGn/you-have-more-than-one-goal-and-that-s-fine
Ah man I feel you. To be honest I’ve been avoiding the abyss recently with some recent career vs family dilemmas. Lemme know if you want to have a chat sometime.
For sure. I think Chana does a good job of talking about some of the downsides of living in a hub similar to what you mention: https://forum.effectivealtruism.org/posts/ZRZHJ3qSitXQ6NGez/about-going-to-a-hub-1
Wow that’s gotta be one of the fastest forum post to plan changes on record. I’m glad to hear this resolved what sounds like a big and tough question in your life. As I mentioned in the post, I do think stints in hubs can be a great experience.
This seems like a good time to remind everyone of @JamesÖz 🔸’s classic post on why you can justify almost anything using historical social movements. This seems especially true when a single anecdote is referenced. Maybe Bregman has more evidence behind this claim, but he certainly hasn’t shared it in this post.