Is effective altruism growing? An update on the stock of funding vs. people
This is a cross-post from 80,000 Hours. See part 2 on the allocation across cause areas. See a short update as of Aug 2022.
In 2015, I argued that funding for effective altruism – especially within meta and longtermist areas – had grown faster than the number of people interested in it, and that this was likely to continue. As a result, there would be a funding ‘overhang’, creating skill bottlenecks for the roles needed to deploy this funding.
A couple of years ago, I wondered if this trend was starting to reverse. There hadn’t been any new donors on the scale of Good Ventures (the main partner of Open Philanthropy), which meant that total committed funds were growing slowly, giving the number of people a chance to catch up.
However, the spectacular asset returns of the last few years and the creation of FTX seem to have shifted the balance back towards funding. The funding overhang now seems even larger, in both proportional and absolute terms, than in 2015.
In the rest of this post, I make some rough guesses at total committed funds compared to the number of interested people, to see how the balance of funding vs. talent might have changed over time.
This will also serve as an update on whether effective altruism is growing – with a focus on what I think are the two most important metrics: the stock of total committed funds, and of committed people.
This analysis also made me make a small update in favour of giving now vs. investing to give later.
Here’s a summary of what’s coming up:
How much funding is committed to effective altruism (going forward)? Around $46 billion.
How quickly have these funds grown? About 37% per year since 2015, with much of the growth concentrated in 2020–2021.
How much is being donated each year? Around $420 million, which is just over 1% of committed capital, and has grown maybe about 21% per year since 2015.
How many committed community members are there? About 7,400 active members and 2,600 ‘committed’ members, growing 10–20% per year 2018–2020, and growing faster than that 2015–2017.
Has the funding overhang grown or shrunk? Funding seems to have grown faster than the number of people, so the overhang has grown in both proportional and absolute terms.
What might be the implications for career choice? Skill bottlenecks have probably increased for people able to think of ways to spend lots of funding effectively, run big projects, and evaluate grants.
To caveat, all of these figures are extremely rough, and are mainly estimated off the top of my head. I haven’t checked them with the relevant donors, so they might not endorse these estimates. However, I think they’re better than what exists currently, and thought it was important to try to give some kind of rough update on how my thinking has changed. There are likely some significant mistakes; I’d be keen to see a more thorough version of this analysis. Overall, please treat this more like notes from a podcast than a carefully researched article.
Which growth metrics matter?
Broadly, the future[1] impact of effective altruism depends on the total stock of:
The quantity of committed funds
The number of committed people (adjusted for skills and influence)
The quality of our ideas (which determine how effectively funding and labour can be turned into impact)
(In economic growth models, this would be capital, labour, and productivity.)
You could consider other resources like political capital, reputation, or public support as well, though we can also think of these as being a special type of labour.
In this post, I’m going to focus on funding and labour. (To do an equivalent analysis for ideas, which could easily matter more, we could try to estimate whether the expected return of our best way of using resources is going up or down, with some kind of adjustment for diminishing returns.)
For both funding and labour, we can look at the growth of the stock of that resource, or the growth of how much of that resource is deployed (i.e. spent on valuable projects) each year.
If we want to estimate how quickly effective altruism is growing, then I think the stock is most relevant, since that determines how many resources will be deployed in the long term.
It’s true there’s no point having a big stock of resources if it’s not being deployed – so we should also want to see growth in deployed resources. However, there can be good reasons to delay deployment while the stock is still growing, such as (i) to gain better information about how to spend it, (ii) to build up grantmaking capacity, or (iii) to accumulate investment returns and career capital. So, if forced to choose between stock and deployment, I’d choose the stock as the best measure of growth.
Both the stock of resources and the amount deployed each year are also more important than ‘top-of-funnel’ metrics (like Google search volume for ‘effective altruism’) though we should watch the top-of-funnel metrics carefully – especially insofar as they correlate with future changes in the stock.
Finally, I think it’s very important to try to make an overall estimate of the total stock of resources. It’s possible to come up with a long list of EA growth metrics, but different metrics typically vary by one or two orders of magnitude in how important they are. Typically most growth is driven by one or two big sources, so many metrics can be stagnant or falling while the total resources available are exploding.
How much funding is committed to effective altruism?
Here are some very, very rough figures:
I’ve tried to focus on funds that are already ‘committed’. I mostly haven’t adjusted them for the chance the person gives up on EA (except for GWWC), but I’ve also ignored the net present value of likely commitments from new future donors.
I’m aware of at least one new donor who is pushing ahead with plans to donate to longtermist issues at around $100 million per year, with perhaps a net present value in the tens of billions.
There are several other billionaires who seem sympathetic to EA (e.g. Reid Hoffman has donated to GPI) – these are ignored.
I’m also ignoring people like Bill Gates who donate to things that EAs would often endorse.
Bear in mind that these figures are extremely volatile – e.g. the value of FTX could easily fall 80% in a market crash, or if a competitor displaces it. Many of the stakes that the wealth comes from are also fairly illiquid – if the owners tried to sell a significant fraction, it could crash the price.
Side note for EA investors
As an individual EA who’s fairly value-aligned with other EA donors, you should invest in order to bring the overall EA portfolio in line with the ideal EA portfolio, which means preferring assets that are uncorrelated with other EA donors’ holdings. The current EA portfolio is highly tilted towards Facebook and FTX, and more broadly towards Ethereum/decentralised finance and big U.S. tech companies. This overweight is much more significant if we risk-weight rather than capital-weight: Ethereum and FTX equity are probably about 5x more volatile and risky than Facebook stock, and so account for the majority of our risk allocation. This means you should only hold assets highly correlated with these if you think this overweight should be increased even further. It seems likelier to me that most EAs should underweight these assets in order to diversify the portfolio.
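As a minimal illustration of the risk-weighting point, here’s a rough sketch (the capital weights below are purely hypothetical, and the 5x volatility multiplier is just the guess above):

```python
# Sketch: risk-weighting vs. capital-weighting a portfolio.
# The capital weights are hypothetical, purely for illustration.
capital_weights = {"Facebook stock": 0.55, "FTX equity + Ethereum": 0.35, "Other": 0.10}
relative_vol = {"Facebook stock": 1.0, "FTX equity + Ethereum": 5.0, "Other": 1.0}  # ~5x is the rough guess above

risk_contrib = {k: capital_weights[k] * relative_vol[k] for k in capital_weights}
total_risk = sum(risk_contrib.values())
for asset, contrib in risk_contrib.items():
    print(f"{asset}: {capital_weights[asset]:.0%} of capital, {contrib / total_risk:.0%} of risk")
# With these made-up weights, FTX/Ethereum is 35% of capital but ~73% of the risk.
```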
How quickly have committed funds grown?
The committed funds are dominated by Good Ventures and FTX, so to estimate total growth, we mainly need to estimate how much they’ve grown:
- In 2015, Forbes estimated Moskovitz’s net worth was $8 billion, so it has grown by 2.6x since then (about 20% per year). This is probably due to (i) Facebook stock price appreciation and (ii) the Asana IPO.
- FTX didn’t exist in 2015.
These two sources alone account for growth of roughly $33 billion since 2015. Their new combined total of $39 billion is about five-fold growth on the $8 billion of 2015.
The other sources make up a minority of the funds, but my rough estimate is they have grown around 2.5x since 2015.
For instance, GiveWell donors (excluding Open Phil) were giving $80 million per year in 2019, up from about $40 million in 2015. We don’t yet have the finalised figures for 2020, but it seems to be significantly higher – perhaps $120 million (see below).
Many of the sources have grown faster. As one example, in 2015 David Goldberg estimated the value of pledges made by Founders Pledge members at $64 million, compared to over $3 billion today.
In total, I’d guess that committed funds in 2015 were about $10 billion, so have grown 4.6x. This is 37% per year over five years.
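For reference, the growth rate here is just the compound annual growth rate implied by the two rough stock estimates; a slightly different 2015 figure moves it by a point or two. A minimal sketch:

```python
# Implied compound annual growth rate of committed funds (very rough inputs).
start, end, years = 10e9, 46e9, 5   # ~$10bn in 2015, ~$46bn now
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.0%} per year")       # ~36%; a slightly smaller 2015 estimate gives ~37%
```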
You might worry that most of this growth was concentrated in the earlier years, and that recent growth has been slow. My guess is that if anything the opposite is the case – growth has been concentrated in the last 1–2 years, in line with the recent boom in technology stocks and cryptocurrencies, and the creation of FTX.
The situation for each cause could be different. My impression is that the funds available for longtermist and meta grantmaking have grown faster than those for global health.
How much funding is being deployed each year?
In early 2020, I estimated that the EA community was deploying about $420 million per year.
Around 60% was through Open Philanthropy, 20% through other GiveWell donors, and 20% from everyone else. The Open Phil grants were based on an average of their giving 2017–2019, which helps to smooth out big multi-year grants.
$420 million per year would be just over 1% of committed capital.
Even those who are relatively into patient philanthropy think we should aim to donate over 1% per year, and at the 2020 EA Leaders Forum, the median estimate was that we should aim to donate 3% of capital per year.
So, if we’re now at 1% per year, that’s one argument that we should aim to tilt the balance towards giving now rather than investing to give later. In contrast, in early 2020, I thought that longtermist donors were giving more like 3% of capital per year, so it wasn’t obvious whether this was too low or too high. (This argument is fairly weak by itself – the quality of the particular opportunities and our ability to make good grants are also big factors.)
How quickly have deployed funds grown?
Since 60% comes via Open Philanthropy, we can mainly look to their grants.
Around 2014–2015, Open Philanthropy was only making grants of around $30 million per year, which rapidly grew to a new plateau of $200–$300 million by 2017.
At that point, they decided to hold deployed funds constant for several years, in order to evaluate their progress and build staff capacity before trying to scale further.
Dustin Moskovitz and Cari Tuna have said they want to donate everything within their lifetimes. This will require hitting around $1 billion deployed per year fairly soon, which I expect to happen. The Metaculus community agrees, forecasting donations of over $1 billion per year by 2030 in a median scenario.
Note that the grants are very lumpy year-to-year. One reason for this is that Open Philanthropy sometimes makes three- or five-year commitments which all accrue to the first year. For instance, I think 2017 is unusually high due to the grants to OpenAI and malaria gene drives. Open Philanthropy has also reduced how much they donate to GiveWell-recommended charities since 2017 (see the next chart below), which was due to deciding to allocate more to their longtermist bucket, which is distributing more slowly than global health. You’ll get a more accurate impression from taking a three-year (or five-year) moving average, which currently stands at ~$240 million. (The chart below is from Applied Divinity Studies.)
FTX is new, so the founders have only been giving millions per year. Their money is not yet highly liquid, and they haven’t created a foundation, so we should expect it to remain low for a while, but eventually increase to hundreds of millions.
Money moved by GiveWell (excluding Open Philanthropy) hit a flat period from 2015–2017, but seems to have started growing again in 2018, by over 30% per year. I believe the 2020 figures are on track to be even better than 2019, but aren’t shown on this chart (or included in my deployed funds estimate).
My impression from the data I’ve seen is that funds donated by GWWC members, EA Funds, Founders Pledge members, Longview Philanthropy, SFF, etc. have all grown significantly (i.e. more than doubling) in the last five years.

Overall, I estimate the community would have been deploying perhaps $160 million per year in 2015, so in total this has grown 2.6-fold, or 21% per year over five years – somewhat slower than the growth of the stock of committed capital, but roughly in line with the number of people.
Looking forward, my best guess is that this rate of growth continues for the next 5-10 years.
How many engaged community members are there?
The best estimate I’m aware of is by Rethink Priorities using data from the 2019 EA Survey:
We estimate there are around 2,315 highly engaged EAs and 6,500 (90% CI: 4,700–10,000) active EAs in the community overall.
‘Highly engaged’ is defined as those who answered 4 or 5 out of 5 for engagement in the survey, and ‘active’ is those who answered 3, 4 or 5.
A 4 on this scale is a fairly high bar for engagement — e.g. many people who’ve made career changes we’ve tracked at 80,000 Hours only report a ‘4’.
In 2020, I estimate about 14% net growth (see the next section), bringing the total number of active EAs to 7,400.
You can see some more statistics on what these people are like in the EA Survey.
If we were to consider the number of people interested in effective altruism, it would be much higher. For instance, at 80,000 Hours we have about 150,000 people on our newsletter, and over 100,000 people have bought a copy of Doing Good Better.
How quickly has the number of engaged community members grown?
Unfortunately, it’s still very hard to estimate the growth rate in the number of committed people, since the data are plagued with selection effects and lag effects.
For instance, the data I’ve seen shows that it often takes several years for someone to go from having first heard about EA to filling out the EA Survey, and from there to reporting themselves as ‘4’ or ‘5’ for engagement. This means that many of the new members from the last few years are not yet identified – so most ways of measuring this growth will undercount it.
In mid 2020, I made 6 estimates for the annual growth rate of committed members, which fell in the range of 0-30% in the last 1–2 years. My central estimate was around 20% (+900 per year at ‘4’ or ‘5’ on the engagement scale in the survey).
More recently, we were able to re-use the method Rethink Priorities used in the analysis above, but with data from the 2020 EA Survey rather than 2019. This analysis found the total number of engaged EAs has grown about 14% in the last year, so would now be 7,400.
This is fairly uncertain, and there’s a reasonable chance the number of people didn’t grow in 2020.
The percentage growth rate would have been a lot higher in 2015–2017, since the base of members was much smaller, and I also think those were unusually good years for getting new people into EA.
Around 2017, there was a shift in strategy from reaching new people to getting those who were already interested into high-impact jobs. This meant that ‘top-line’ metrics — such as web reach and media impressions — slowed down.
My take is that this shift in strategy was at least partially successful, insofar as the number of committed EAs and their influence has continued to grow, despite flattish top-line metrics. (Though there’s a reasonable chance EA could have grown even faster if the top-line growth had continued.)
Going forward, we’ll eventually need to get the top-of-funnel metrics growing again, or the stock of ‘medium’ engaged people will run out, and the number of ‘highly’ engaged people will stop growing. It seems like several groups are prioritising outreach to new people more highly going forward.
What about the skill level of the people involved?
This is a big uncertainty because one influential member (e.g. in a senior position at the White House) can achieve what it might take thousands of others to achieve.
My sense is that the typical influence and skill level of members has grown a lot, partly just because people have grown older and advanced their careers. For example, there are now a number of interested people in senior government positions in the U.K. and U.S. who weren’t there in 2015. The average age of community members is several years higher.
In terms of the level of ‘talent’ of new members, we don’t have great data. Impressions seem to be split between the level being similar to the past and being a bit lower. So if we averaged the two, in expectation there would be a small decrease.
How much labour is being deployed?
It’s hard to estimate how much labour is being ‘deployed’. People can at most deploy one year per year if they focus on impact, but a proper estimate should account for:
- What fraction of people are focused on impact compared to career capital. According to the 2020 EA Survey, around two community members are prioritising career capital per person prioritising immediate impact.
- Increasing productivity over time. The mean age in the community is 29, but most people only hit peak productivity around age 40–60 (though most of the increase happens by the early 30s).
- Discounting the value of future years, especially for the chance of dropping out, though potentially including the labour of future recruits.
Overall, my guess is that we’re only deploying 1–2% of the net present value of the labour of the current membership. This could be an argument for shifting the balance a bit more towards immediate impact rather than career capital – though this is a really complicated question. Young people often have great opportunities to build career capital, and if those opportunities increase their lifetime impact, they should take them, no matter what others in the community are doing.
How quickly has deployed labour increased?
If the percentage of people focused on career capital vs. impact is similar over time, then deployed labour should track the stock of people – so we can try to track that going forward.
To cobble together some rough data: according to the 2019 EA Survey, about 260 people said they’re working at ‘an EA org’ (which includes object-level charities). With an estimated 40% response rate, that would imply 650 people in total, which seems a lot higher than what I would have guessed in 2015.
My impression is that most of the most central EA orgs have also grown headcount ~20% per year (e.g. CEA, MIRI, 80K), roughly doubling since 2015, and keeping pace with growth in the number of people.
Working at an EA org is only one option, and a better estimate would aim to track the number of people ‘deployed’ in research, policy, earning to give, etc. as well.
Changes in the overhang: how quickly has funding grown compared to people?
In 2015, I argued there was a funding overhang within meta and longtermist causes. (Though note it’s less obvious there’s a funding overhang within global health, and to a lesser extent animal welfare.) How has this likely evolved?
During 2017–2019, I thought the number of people might have been catching up, but in 2020, it seemed like the growth of each had been roughly similar. A similar rate of proportional growth would mean the absolute size of the overhang was increasing.
As of July 2021 and the latest FTX deal, I now think the amount of funding has grown faster than people, making the growth of the overhang even larger.
Here are some semi-made-up numbers to illustrate the idea:
Suppose that, in 2015:
There was $10 billion
There were 2,500 people
30% of these people are employed using EA funding
The average cost of employing someone is $100,000
In that case, it would take $75 million per year to employ them. But $10 billion can generate perpetual income of $200 million,[2] so the overhang is $125 million per year.
Suppose that in 2021:
There is $50 billion
There are 7,500 people.
Then, it would take $225 million to employ 30% of them, but you can generate $1,000 million of income with that capital, so the overhang is $775 million per year.
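Here’s the same arithmetic as a small sketch (the 2% perpetual withdrawal rate comes from footnote 2; everything else is the semi-made-up figures above):

```python
def overhang(capital, people, employed_share=0.30, cost_per_person=100_000, withdrawal_rate=0.02):
    """Perpetual income from the capital minus the cost of employing a share of the community."""
    income = capital * withdrawal_rate                   # sustainable annual spending (footnote 2)
    payroll = people * employed_share * cost_per_person  # cost of employing 30% of members
    return income - payroll

print(overhang(10e9, 2_500))  # 2015: $125,000,000 per year
print(overhang(50e9, 7_500))  # 2021: $775,000,000 per year
```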
Another way to try to quantify the overhang is to estimate the financial value of the labour and compare it to the committed funding. My rough estimate is that the labour is worth $50k to $500k per person per year. If, after accounting for drop-out, the average career has 20 years remaining, that would be $1m–$10m per person. (If this seems high, note that it’s driven largely by outliers.) If there are 7,400 people, that would be $7.4bn–$74bn in total (with a central estimate of around $20bn). In comparison, I estimated there is almost $50bn of committed capital, so the value of the labour is most likely lower. In contrast, in the economy as a whole, I think human capital is normally thought to be worth more than physical capital, so the situation in effective altruism is most likely the reverse of the norm.
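As a sketch of that labour-valuation estimate, using the rough ranges just given:

```python
# Rough value of the community's labour, compared with committed capital.
value_per_year = (50_000, 500_000)   # estimated value of a year of labour, per person
years_remaining = 20                 # average career remaining after accounting for drop-out
people = 7_400

low, high = (v * years_remaining * people for v in value_per_year)
print(f"${low / 1e9:.1f}bn to ${high / 1e9:.0f}bn")  # $7.4bn to $74bn of human capital
# The geometric midpoint is ~$23bn, versus roughly $46bn of committed financial capital.
```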
Note that if there’s an overhang, the money can be invested to deploy later, or spent employing people outside of the community (e.g. funding academic research), so it’s not that the money is wasted – it’s more that we’ll end up missing some especially great opportunities that could have been taken otherwise. These will especially be opportunities that are best tackled by people who deeply share the EA mindset. I’ll talk more about the implications later.
What about the future of the balance of funding vs. people?
Financial investment returns
A major driver of the stock of capital will be the investment returns of Facebook, Asana, FTX, and Ethereum.
If there’s a crash in tech stocks and cryptocurrencies (which seems fairly likely in the short term), the balance could move somewhat back towards people.
In the longer term, I’ll leave it to the reader to forecast the future returns of a portfolio like the above.
Personally, I feel uneasy projecting that U.S. tech stocks will return more than 1–5% per year, due to their high valuations. I expect cryptocurrencies will return more, but with much higher risk.
New donors vs. new members
If we assume that effective altruism will keep growing, and won’t collapse, and that key existing donors will remain supporters, does it seem harder to grow the number of donors or the number of members who aren’t donors?
As noted above, I think it’s more likely than not that another $100 million per year/$20 billion NPV donor enters the community within the coming years. This would be roughly 40% growth in the total stock, compared to 15% per year growth in people, which would shift the balance even more towards funding.
The Metaculus community also estimates there’s a 50% chance of another Good Ventures-scale donor within five years.
After this, I expect it’ll become harder to grow the pool of committed funds at current rates.
Going from $60 billion to $120 billion would require convincing someone with a net worth of $100+ billion, like Jeff Bezos, to give a large fraction of it to EA-aligned causes, or might require convincing around 10 ‘regular’ billionaires.
That said, it seems possible. For instance, the total pledged by all members of the Giving Pledge is around $600 billion, so if 20% of them were into EA, that would be $120 billion; four-fold growth from today.
The total U.S. philanthropic sector is $400 billion per year, so if 1% of that was EA aligned, that would be $4 billion per year, which is 10-fold growth from today, and three-fold growth from where I expect us to be in 5–10 years.
Expanding the number of committed community members from around 5,000 to around 50,000 seems somewhat more achievable given enough time.
- Only about 10% of U.S. college graduates have even heard of EA, let alone seriously considered its ideas, and it’s even less well known in non-English-speaking countries.
- A recent survey of Oxford students found that they believed the most effective global health charity was only ~1.5x better than the average — in line with what the average American thinks — while EAs and global health experts estimated the ratio is ~100x. This suggests that even among Oxford students, where a lot of outreach has been done, the most central message of EA is not yet widely known.
If it seems easier to grow the number of people 10-fold than to grow the committed funds 10-fold, then I expect the size of the overhang will eventually decrease, but this could easily take 20 years, and I expect the overhang is going to be with us for at least the next five years.
A big uncertainty here is what fraction of people will ever be interested in EA in the long term – it’s possible its appeal is very narrow, but happens to include an unusually large fraction of wealthy people. In that case, the overhang could persist much longer.
Human capital investment returns
One other complicating factor is that, as noted, people’s productivity tends to increase with age, and many community members are focused on growing their career capital.
For instance, if someone goes from a masters student to a senior government official, then their influence has maybe increased by a factor of 1,000. This could enable the community to achieve far more, and to deploy far more funds, even if the number of people doesn’t grow that much.
Implications for career choice
Here are some very rough thoughts on what this might mean for people who want high-impact careers and feel aligned with the current effective altruism community. I’m going to focus on longtermist and meta causes, since they’re what I know the best and where the biggest overhang exists.
Which roles are most needed?
The existence of a funding overhang within meta and longtermist causes created a bottleneck for the skills needed to deploy EA funds, especially in ways that are hard for people who don’t deeply identify with the mindset.[3]
We could break down some of the key leadership positions needed to deploy these funds as follows:
Researchers able to come up with ideas for big projects, new cause areas, or other new ways to spend funds on a big scale
EA entrepreneurs/managers/research leads able to run these projects and hire lots of people
Grantmakers able to evaluate these projects
These correspond to bottlenecks in ideas, management, and vetting, respectively.
Given that many of the most promising projects involve research and policy, I’d say there’s a special need to have these skills within those sectors, as well as within the causes longtermists are most focused on, such as AI and biosecurity (e.g. someone who can lead an AI research lab; the kind of person who can found CSET). That said, I hope that longtermists expand into a wider range of causes, and there are opportunities in other sectors too.
Putting the funding overhang aside, the skill sets listed above would still be valuable: as an illustration, these skills also seem very valuable within global health – and typically more valuable than earning to give – though there’s less obviously an overhang there.
But the presence of the overhang makes them even more valuable. Finding an extra grantmaker or entrepreneur can easily unlock millions of dollars of grants that would otherwise be left invested.[4]
I’ve thought these roles were some of the most needed in the community since 2015, and now that the overhang seems even bigger — and seems likely to remain big for 10 years — I think they’re even more valuable than I did back then.
Personally, if given the choice between finding an extra person who’s a good fit for one of these roles and finding someone donating $X million per year, then for the two options to seem similarly valuable, X would typically need to be over 3, and often over 10 (though this hugely depends on fit and the circumstances).
This would also mean that if you have a 10% chance of succeeding, then the expected value of the path is $300,000–$2 million per year (and the value of information will be very high if you can determine your fit within a couple of years).
The funding overhang also created bottlenecks for people able to staff projects, and to work in supporting roles. For each person in a leadership role, there’s typically a need for at least several people in the more junior versions of these roles or supporting positions — e.g. research assistants, operations specialists, marketers, ML engineers, people executing on whatever projects are being done, etc.
I’d typically prefer someone in these roles to an additional person donating $400,000–$4 million per year (again, with huge variance depending on fit).
The bottleneck for supporting roles has, however, been a bit smaller than you might expect, because the number of these roles was limited by the number of people in leadership positions able to create these positions.
I think for the more junior and supporting roles there was also a vetting bottleneck. I’m unsure if there were infrastructure or coordination bottlenecks beyond the factors mentioned, but it seems plausible.
How should individuals respond to these needs?
If you might be able to help fill one of these key bottlenecks, there’s a good chance it’ll be the highest impact thing you can do.
Ideally, you can shoot for the tail outcome of a leadership role within one of these categories (e.g. becoming a grantmaker, manager, or someone who finds a new cause area). Aiming for a leadership position also sets you up to go into a highly valuable supporting or more junior equivalent role (e.g. being a researcher for a grantmaker, or being an operations specialist working under a manager).
Your next step will likely involve trying to gain career capital that will accelerate you in this path. Depending on what career capital you focus on, there could be many other strong options you could switch to otherwise (e.g. government jobs).
Be aware that the leadership-style roles are very challenging – besides being smart and hardworking, you need to be self-motivated, independently minded, and maybe creative. They also typically require deep knowledge of effective altruism, and a lot of trust from — and a good reputation within — the community. It’s difficult to become trusted with millions of dollars or a team of tens of people.
So, no one should assume they’ll succeed, and everyone should have a backup plan.
The ‘supporting’ roles are also more challenging than you might expect. Besides also requiring a significant amount of skill and trust (though less than the leadership roles), there’s a lack of mentorship capacity, and their creation is limited by the number of people in leadership roles.
On our job board, I made a quick count of 40 roles like these within our top recommended problem areas posted within the last two months, so there are perhaps 240 per year.
This compares to about 7,400 engaged community members, of whom perhaps about 1,000 are early career and looking to start these kinds of jobs.
So there are a significant number of opportunities, and given their impact I think many people should pursue them, but it’s important to know there’s a reasonable chance it doesn’t work out.
If you’re unsure of your chances of eventually being able to land a supporting role, then build career capital towards those roles, but focus on ways of gaining career capital that also take you towards 1–2 other longer-term roles you find attractive.
I want to be honest about the challenges of these roles so that people know what they’re in for, but I’m also very concerned about being too discouraging.
We meet many people who are under-confident in their abilities, and especially their potential to grow over the years.
I think it’s generally better to aim a bit high than too low. If you succeed, you’ll have a big impact. If it doesn’t work out, you can switch to your plan B instead.
Trying to fill the most pressing skill bottlenecks in the world’s most pressing problems is not easy, and I respect anyone who tries.
What does this mean for earning to give?
The success of FTX is arguably a huge vindication of the idea of earning to give, and so in that sense it’s a positive update.
On balance, however, I think the increase in funding compared to people is an update against the value of earning to give at the margin.
This doesn’t mean earning to give has no value:
Medium-sized donors can often find opportunities that aren’t practical for the largest donors to exploit – the ecosystem needs a mixture of ‘angel’ donors to complement the ‘VCs’ like Open Philanthropy. Open Philanthropy isn’t covering many of the problem areas listed here and often can’t pursue small individual grants.
You can save money, invest it, and spend when the funding overhang has decreased, or in order to practice patient philanthropy more generally.
You could support causes that seem more funding constrained, like global health.
But I do think the relative value of earning to give has fallen over time, as the overhang has increased.
Overall, I would encourage people early in their career to very seriously consider options besides earning to give first.
If you’re already earning to give — and especially if you don’t seem to have a chance of tail outcomes (e.g. startup exit) — I’d encourage you to seriously consider whether you could switch.
That said, there are definitely people for whom earning to give remains their overall top option, especially if they have personal constraints, can’t find another role that’s a good fit, have unusually high earnings, or are learning a lot from their job (and might switch out later).
Other jobs
I’ve focused on earning to give and jobs working ‘directly’ to deploy EA funds, but I definitely don’t want to give the impression these are the only impactful jobs.
I continue to think that jobs in government, academia, other philanthropic institutions and relevant for-profit companies (e.g. working on biotech) can be very high impact and great for career capital.
For instance, it would be possible for the community to have an absolutely massive impact via improving government policy around existential risks, and this doesn’t require anyone to get a job ‘in EA’.
I don’t discuss them more here because they don’t require EA funding to pursue, so their expected impact isn’t especially affected by the size of the funding overhang. I’d still encourage readers to consider them.
What’s next
If you think you might be able to help deal with one of the key bottlenecks mentioned, or are interested in switching out of earning to give, we’ve recently ended the waitlist for our one-on-one advice, and would encourage you to apply.
You might also be interested in:
- What the effective altruism community most needs. A conversation between me and Arden Koehler, covering talent vs. funding gaps in more depth.
- Why do some organisations say their recent hires are worth so much?
Stay up to date on new research like this by following me on Twitter.
1. Looking backwards, the main thing we care about is actual impact. Personally I think the EA community has had a lot of success doing things like turning AI safety into an accepted field, funding malaria prevention, scaling up cage-free campaigns etc., though this is a matter of judgement.
2. I assume a perpetual withdrawal rate of 2%. Studies of the market in the past often find that it’s possible to withdraw 2–3% from a portfolio that’s mainly equities and not decrease your capital in real terms. 2% is also in line with the current dividend yield of global stocks – the capital should roughly track global nominal GDP, so the 2% represents what can be withdrawn each year. This 2% figure could be very conservative – if EA donors can earn higher returns than say an 80% equity portfolio, as they have historically, then it’ll be possible to withdraw a lot more. This would make the funding overhang a lot larger. I’ve also compared a perpetual withdrawal rate with the current stock of people, but the typical career of a member will only last 30 years, so it might have been better to assume we also want to spend down the capital over 30 years. In that case, you could likely withdraw 3–4% (a typical retirement safe withdrawal rate for a 30-year retirement), which would up to double the available income.
3. For instance, there’s a pre-existing community doing conventional biosecurity, which made it easier to turn money into progress on biosecurity despite a lack of EA community members working in the area. In contrast, there was no existing AI safety community, which has meant it has taken longer to deploy funds there.
4. The more reason for urgency, the bigger these bottlenecks. If you’d like to see more patient philanthropy, then it might be fine just to keep all the funding invested to spend later.
One comment regarding:
But the presence of the overhang makes them even more valuable. Finding an extra grantmaker or entrepreneur can easily unlock millions of dollars of grants that would otherwise be left invested.
If we really think that this is the case for EA / charity entrepreneurs I think we should consider the following:
We spend too little effort on recruiting entrepreneurial types in the movement. Being relatively new in the movement (coming in as an entrepreneur), I think we should foster a more entrepreneurial culture than we currently do. I know some fellow entrepreneurs that dropped out of / didn’t enter the movement because they felt EA is an intellectual endeavour with too little focus on actually doing something.
Adjacent to this argument I think that we should spend more resources on upskilling entrepreneurial EAs. Charity Entrepreneurship is doing a great job with their incubation program, but their current capacity is limited and there is definitely room for growth given the large interest in the program. In addition to this we should also encourage cheap tests of EA entrepreneurship within national / local chapters. Currently the focus is mainly on community building and running fellowships.
Entrepreneurial projects at local chapters are currently considered as nice-to-have and as a way to attract people to the community. But if Ben’s statement is true we should consider national groups as the breeding ground for entrepreneurs. They are the first part of the EA entrepreneur pipeline with a next possible step being CE’s incubation program or starting a charity right away. In this model local and national group leaders should support these aspiring entrepreneurs with advice and connections to other people in the movement.
I agree—people able to run big EA projects seem like one of our key bottlenecks right now. That was one of my motivations for writing this post, and this mini profile.
I’m especially excited about finding people who could run $100m+ per year ‘megaprojects’, as opposed to more non-profits in the $1-$10m per year range, though I agree this might require building a bigger pipeline of smaller projects.
I also agree it seems plausible that the culture of the movement is a bit biased against entrepreneurship, so we’re not attracting as many people with this skillset as we could given our current reach. I’d be keen to do more celebrating of people who have tried to start new things.
This said, it might be even more pressing simply to reach 2x as many people, and then we’ll find a bunch of founders among them.
I’d also want to be cautious about using the term ‘entrepreneur’ to describe what we’re looking for, since I think that tends to bring to mind a particular Silicon Valley type, which is often pretty different from the people who have succeeded running big projects in EA. E.g. classic entrepreneurship is often about quickly testing lots of things, whereas many EA projects require really good judgement. That’s why I couched it in terms of ‘people who could run big projects in EA’ (leaving it open about exactly which skills are most needed there).
To give a concrete example, I mention the example of ‘the type of person who could found CSET’ - and the skills there seem pretty different from the people who typically self-identify as entrepreneurs on HN etc.
Do you think it is useful to speculate about what these orgs could be, in any sense (cause area, purpose, etc.)?
Maybe this speculation could be useful to give some sense/hint/structure to how these orgs can be fostered (as opposed to directly encouraging someone to create such an org). For example, it may guide focus on certain smaller orgs or promoting some kind of cultural change.
To try to be helpful, here’s a sample of some founders from orgs who received the 3 largest Open Phil grants.
CSET—Jason Matheny—https://en.wikipedia.org/wiki/Jason_Gaverick_Matheny
OpenAI—Sam Altman—https://en.wikipedia.org/wiki/Sam_Altman
Malaria Consortium—Sylvia Meek—https://www.malariaconsortium.org/sylvia-meek/dr-sylvia-meek-1954-2016.htm
Indeed, at their current life stage (Sam Altman was a SV founder) these people are very different from the “move fast and break things” startup style.
Touching on @Ben_West’s comment, many of these founders seem similar in profile to founders at mid-sized or larger companies, and they also have significant scientific experience.
Matheny was a scientist and manager of research, and Malaria Consortium’s founding team has multiple strong scientists. At the same time, these are people who have very high human capital in the form of executive experience. Their profile seems normal for ‘CEOs’.
While many CEOs do have scientific degrees, the level of scientific prestige and activity among this group might be uncommon.
This pattern could be useful in some way (most obviously, you could just ask the current senior research leaders of EA aligned orgs/think tanks if they have a vision for a useful project).
This is being done here: https://forum.effectivealtruism.org/posts/ckcoSe3CS2n3BW3aT/what-ea-projects-could-grow-to-become-megaprojects
Thanks for pointing this out!
Thanks for your response Benjamin (and Ben West for asking a question).
Sorry for not being completely clear about this, but I pointed towards the profile of an (EA-style) charity entrepreneur, which is indeed different from the regular SV co-founder (although there are similarities, but let’s not go into the details). I think the mini profile you wrote about a non-profit entrepreneur is great and I am happy to see that 80k pushes this. Hopefully the Community Building Program will follow, since national and local chapters are for many people the first point of entrance into EA. It would be good if this program also encouraged local and national chapters to make valuable cheap tests of non-profit entrepreneurship viable.
I am also very happy that you acknowledge that reaching out to get 2x as many people in is probably desirable. Also here I think that the “common EA opinion” shifted quite a lot over the ~two years I’ve been involved in EA, great to see!
As someone who’s spent a fair amount of time with the SV startup scene (have cofounded multiple companies) and the EA scene, I’d flag that the cultures of at least these two are quite different and often difficult to bridge.
Most of the large EA-style projects I’d be excited about are ones that would require a fair amount of buy-in and trust from the senior EA community. For example, if you’re making a new org to investigate AGI safety, bio safety, or expand EA, senior EAs would care a lot about the leadership having really strong epistemics and understanding of existing EA thinking on the topic.
One problem is that entrepreneurship culture can present a few challenges:
1) There’s often a lot of overconfidence and weird epistemics
2) Often there’s not much spare time to learn about EA concepts
3) Leaders often seem to grow egos
The key thing, to me, seems to be some combination of humility and willingness to begin at the bottom for a while. I think that becoming well versed in EA/longtermism enough to found something important, can often require beginning in a low-level research role or similar.
One strategy some people give is something like, “I don’t care about buy-in from the EA community, I could start something myself quickly, and raise a lot of other money”. In sensitive areas, this can get downright scary, in my opinion.
Of my current successful entrepreneur friends, I can’t see many of them going the ‘go low-status for a few years’ route, but I could see some. Most people I know don’t seem to want to go down a few status and confidence levels for a while.
There are definitely some prominent examples in EA of people who have done similar things (I’d flag Ben West, who seems to have pulled off a successful transition, and is discussed in these comments), but there aren’t all too many.
The FHI RSP program was a nice introductory program, but was definitely made more for researchers than entrepreneurs. I could imagine us having similar transitionary programs for entrepreneur-types in the future. There are probably some ways more programs and work in this area could make things easier; for instance, they could seem really prestigious (flashy branding), in part to make it more palatable for people taking status-decreases for a while.
If there are successful entrepreneurs out there reading this interested in chatting, I’d of course be happy to (just message me), though I’m sure 80k and other groups would be interested as well.
(Note: I think Charity Entrepreneurship gets around this a bit by first, focusing on younger people with potential to be entrepreneurs, rather than people who are already very successful, and second, focusing on particular interventions that can be done more independently.)
A lot of this rings true to me.
I feel like these conversations often get confusing because people mean different things by the term “entrepreneur”, so I wonder if you could define what you mean by “entrepreneur” and what you think they would do in EA?
Even with very commercializable EA projects like cellular agriculture, my experience is that the best founders are closer to scientists than traditional CEOs, and once you get to things like disentanglement research the best founders have almost no skills in common with e.g. tech company founders, despite them both technically being “entrepreneurs” in some sense.
One extra thought is that there was a longtermist incubator project for a while, but they decided to close it down. I think one reason was they thought there weren’t enough potential entrepreneurs in the first place, so the bigger bottleneck was movement growth rather than mentoring. I think another bottleneck was having an entrepreneur who could run the incubator itself, and also a lack of ideas that can be easily taken forward without a lot more thinking. (Though I could be mis-remembering.)
I think they were pretty low profile, and the types of things that Jan-WillemvanPutten is suggesting are about being more present/visible in EA in order to attract a subculture to develop more. I think this example supports his main point more actually, because movement growth is quite driven by culture and attractors for different subcultures.
(As an aside, I was engaged with the longtermist incubator and found it helpful/useful.)
(Another aside, I can think of a few downsides of Jan-WillemvanPutten’s specific suggestion, but I think the important part is the visibility and culture building aspect.)
Agree that this seems neglected. EA Germany (and I personally) are happy to support EA projects that have potential to grow into impactful EA organisations. If you have ideas on how to better do that (within the limited capacity of national group organisers), feel free to get in touch!
(I also agree on the importance of having founders that are value-aligned and have good epistemics, which I think some entrepreneurs are but many others may not be)
An extra thought is that this seems like a positive update on the cost-effectiveness of past meta work.
Here’s a rough and probably overoptimistic back of the envelope to illustrate the idea:
I’d guess that maybe $50m was spent on formal movement building efforts in 2020. This is intended to include things like OP & GiveWell’s spending on staff, most of FHI and MIRI, plus all of the explicit movement building orgs like CEA and 80k. If that started at 0 in 2010, then it might add up to $250m over the decade (assuming straight line growth).
If the average cost of employing someone was $50k, that would imply about 5000 person-years of work were invested.
Let’s assume that formal movement building efforts receive 1⁄3 of the credit for resources raised (with the other 2⁄3 going to informal efforts like personal connections and to the original founders).
Then, the formal efforts have ‘raised’ $15bn and 146,000 person-years (assuming 20yr per person & growth of 7300 people).
So that’s an average return of 60 dollars per 1 dollar in, and 29 person years per 1 year in. You also get a lot of research thrown in for free.
Individual projects will vary a lot, but that’s a nice base rate!
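Here’s that envelope math as a sketch, reproducing the figures in this comment (note the 1/3 credit share is applied to the funds figure, matching the $15bn above):

```python
# Back-of-the-envelope on past meta work, using the figures in this comment.
spend_2020 = 50e6                 # rough formal movement-building spend in 2020
years = 10                        # straight-line growth from ~0 in 2010
cost_per_person_year = 50_000
credit_share = 1 / 3              # credit assigned to formal efforts (applied to funds below)

total_spend = spend_2020 * years / 2                  # ~$250m over the decade
person_years_in = total_spend / cost_per_person_year  # ~5,000 person-years of meta work

funds_raised = 46e9 * credit_share                    # ~$15bn attributed to formal efforts
person_years_raised = 7_300 * 20                      # ~146,000 (7,300 new people x 20-year careers)

print(round(funds_raised / total_spend))              # ~60 dollars out per dollar in
print(round(person_years_raised / person_years_in))   # ~29 person-years out per year in
```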
For reference, the Bill & Melinda Gates Foundation is the second largest charitable foundation in the world, holding $49.8 billion in assets.
Though also note that most of Gates’ and Buffet’s wealth hasn’t yet been put into the foundation.
What’s the largest?
https://en.wikipedia.org/wiki/List_of_wealthiest_charitable_foundations
Apparently it’s the Novo Nordisk Foundation, which owns a Danish pharma company that sells diabetes medication.
Thanks a lot for the thorough post! I found it really helpful how you put rough numbers on everything, and made things concrete, and I feel like I have clearer intuitions for these questions now.
My understanding is that these considerations only apply to longtermists, and that for people who prioritise global health and well-being or animal welfare this is all much less clear, would you agree with that? My read is that those cause areas have much more high quality work by non EAs and high quality, shovel ready interventions.
I think that nuance can often get lost in discussions like this, and I imagine a good chunk of 80K readers are not longtermists, so if this only applies to longtermists I think that would be good to make clear in a prominent place.
And do you have any idea how the numbers for total funding break down into different cause areas? That seems important for reasoning about this.
Hey, I agree the situation is more unclear outside of longtermism and meta (I flag that a couple of times). It’s also pretty complicated, so I didn’t want to put it in the post and hold up publication.
Here are some quick thoughts:
The money available to global health and animal welfare has grown a lot as well (perhaps 2-3x; e.g. see the comment below about Moskovitz, though it's not just that), so a similar dynamic could apply.
Focusing on global health, there is a funding gap for GiveWell-recommended charities more effective than GiveDirectly, though this gap seems more filled than in the past. If you look at the RFMF estimates, additional donations now mostly go towards operations in ~3 years' time rather than the next 1-2 years. And you'd want to think about things like Vitalik's recent $54m donation.
My guess is that for interventions more cost-effective than current GiveWell-recommended charities (excluding GiveDirectly), there is a big funding overhang (in the sense that GiveWell/OP would happily fund a lot more of this stuff asap if it existed).
For interventions similar to the GiveWell top charities, it’s fairly balanced.
And then if you’re OK to drop cost-effectiveness by 10-20x, GiveDirectly could ultimately absorb billions, so there’s still a funding gap there.
I also still think that many people working on global health could have more impact via jobs in research, policy, nonprofit entrepreneurship etc. than through earning to give.
Turning to animal welfare, Lewis Bollard said in our 2017 podcast that animal welfare seemed to have more of a funding overhang / be more talent constrained, and this wasn’t only driven by OP entering the space.
If the growth in funding has kept pace with the growth of people working on animal welfare, then the size of the overhang should be even bigger today in absolute terms.
Moreover, many animal welfare people are now focused on clean meat, which often doesn’t need philanthropic funding in the first place.
On the other hand, many animal welfare non-profits still seem more likely to say that if they had more funding, they’d hire a bunch more people, and salaries also seem fairly low. I’m not sure exactly what’s going on there.
(For context I work in an org where currently >50% of researchers do subcause prioritization within animal welfare, though I’m not an animal welfare researcher myself. Speaking only for myself, just one person’s take, etc, etc)
Some quick personal thoughts on animal welfare funding vs talent constraints:
Naively I would guess that the median research hire in animal welfare for RP would contribute >$1 million/year of counterfactual value solely in terms of improving the quality and quantity of grantmaker decisions within animal welfare. For example, I would naively ascribe somewhat higher numbers for Neil’s EU work, or if quality of the moral weights work is improved by the equivalent of additional thinking of a median researcher-year.
(Note that this is fairly BOTEC and there are obvious biases for someone to think that their work and that of their coworkers is especially important).
I think this is wrong or at least may be easy to misinterpret for the typical reader. The biggest bottleneck I see for clean meat is strategic clarity along the lines of “can we honestly and accurately have a coherent roadmap, enough to persuade funders and others that the research we’re currently doing is on the pathway to eventually making cost-competitive clean meat that will have widespread regulatory and consumer adoption” (A coherent and technically sound response to Humbird 2020 is necessary but not sufficient here).
But if you’re convinced that current work in clean meat is on the path to producing clean meat with the relevant desiderata, then in those worlds clean meat is much more bottlenecked by funding than by technical talent (compared to say research of malaria vaccines or universal viral sequencing). As a sanity check, all the cultured meat research in academia probably looks like <$10million/year(?), and maybe another !1.5 orders of magnitude more in industry, well over an order of magnitude less than the valuation of a single plant-based startup alone.
So I’d expect in worlds where clean meat research is tractable, I’d expect philanthropic funding to be significant in improving the field, eg by drawing people away from plant-based meats, biopharma, yeast germline manufacture, etc. I suspect (though I don’t have direct evidence) that many people in the latter two categories would take 30% pay cuts to work in something that’s technically interesting and with high potential altruistic value like clean meat, but not >75% pay cuts.
It is confusing, but my guess is that Benjamin Todd meant plant-based meat, for the reasons you indicate (the size of the recently popular PBM industry, where the recent valuation of a single company is many times all funding in FAW, as opposed to in vitro lab-grown meat, which is much farther from commercialization).
Yes, sorry I was thinking of meat substitutes broadly. I agree clean meat is more funding constrained than plant based meat, because it’s further from commercialisation.
Hmm yeah maybe(?) he just misspoke. I do think “clean meat” usually refers to in vitro lab grown meat rather than “all meat alternatives”, both within EA and more broadly, so if clean meat was a standin for PBM I’d stand by my assertion “may be easy to misinterpret for the typical reader”
FWIW I looked into PBM much less than clean meat, but I would guess it would be overconfident to assume that replacing all (or most) slaughter-based meat by scaling up existing systems is inevitable, and I would guess progress is at least somewhat amenable to philanthropic funding, though not necessarily on parity with top farmed animal welfare interventions like corporate campaigns.
Btw, I too find myself confused about this point by Benjamin_Todd and also am not sure exactly what’s going on here.
I think Benjamin_Todd is saying that:
1. There is currently "room for funding" in (farmed) animal welfare, maybe specifically in talent and salaries.
2. There was a reported overhang of funding in farmed animal welfare. Extrapolating from growth in Good Ventures, this overhang could even have increased.
Points 1 and 2 seem to be a contradiction.
Some quick thoughts of mine that may be low quality:
I know some people in the farmed animal welfare space; funding is being thoughtfully deployed, and there is attention to talent and appropriate compensation.
There's a lot of actual on-the-ground, operational activity in animal welfare compared to meta or longtermist cause areas. In my personal bias/perspective/worldview, this activity is inherently less cohesive and produces noise, and this is normal. This noise can make it a little harder to get signal about funding gaps.
Increasing salaries or significantly improving the stream of talent are inherently delicate and slow processes involving changes in culture.
I think 2017 is a long time ago in the EA movement. It seems reasonable to get newer information about funding. Note that 80,000 Hours has clearly hosted important leaders in farmed animal welfare since 2017.
I’m more sure that actual on the ground work, operations and implementation, is precious and can be hard to communicate or make visible.
+1
I think I often hear longtermists discuss funding in EA and use the $22 billion number from Open Philanthropy. And I think people often make some implicit mental move of thinking that's also the money dedicated to longtermism, even though my understanding is very much that that's not all available to longtermism.
In the recent podcast with Alexander Berger, he estimates it’ll be split roughly 50:50 longtermism vs. global health and wellbeing.
This means that the funding available to global health and wellbeing has also grown a lot too, since Dustin Moskovitz’s net worth has gone from $8bn to $25bn.
If this is true, why not spend way more on recruiting and wages? It’s surprising to me that the upper bound could be so much larger than equivalent salary in the for-profit sector.
I might be missing something, but it seems to me the basic implication of the funding overhang is that EA should convert more of its money into ‘talent’ (via Meta spending or just paying more).
This is a big topic, and there are lots of factors.
One is that paying very high salaries would be a huge PR risk.
That aside, the salaries at many orgs are already good, while the most aligned people are not especially motivated by money. My sense is that e.g. doubling the salaries from here would only lead to a small increase in the talent pool (like maybe +10%).
Doubling costs to get +10% labour doesn’t seem like a great deal—that marginal spending would be about a tenth as cost-effective as our current average. (And that’s ignoring the PR and cultural costs.)
Some orgs are probably underpaying, though, and I’d encourage them to raise salaries.
This kind of ambivalent view of salary-increases is quite mainstream within EA, but as far as I can tell, a more optimistic view is warranted.
If 90% of engaged EAs were wholly unmotivated by money in the range of $50k-200k/yr, you’d expect >90% of EA software engineers, industry researchers, and consultants to be giving >50%, but much fewer do. You’d expect EAs to be nearly indifferent toward pay in job choice, but they’re not. You’d expect that when you increase EAs’ salaries, they’d just donate a large portion on to great tax-deductible charities, so >75% of the salary increase would be refunded on to other effective orgs. But when you say that the spending would be only a tenth as effective (rather than ~four-tenths), clearly you don’t.
Although some EAs are insensitive to money in this way, 90% seems too high. Rather, with doubled pay, I think you’d see some quality improvements from an increased applicant pool, and some improved workforce size (>10%) and retention. Some would buy themselves some productivity and happiness. And yes, some would donate. I don’t think you’d draw too many hard-to-detect “fake EAs”—we haven’t seen many so far. Rather, it seems more likely to help quality than hurt on the margin.
I don’t think the PR risk is so huge at <$250k/yr levels. Closest thing I can think of is commentary regarding folks at OpenAI, but it’s a bigger target, with higher pay. If the message gets out that EA employees are not bound to a vow of poverty, and are actually compensated for >10% of the good they’re doing, I’d argue that’s would enlarge and improve the recruitment pool on the margin.
(NB. As an EA worker, I’d stand to gain from increased salaries, as would many in this conversation. Although not for the next few years at least given the policies of my current (university) employer.)
[Predictable disclaimers, although in my defence, I’ve been banging this drum long before I had (or anticipated to have) a conflict of interest.]
I also find the reluctance to wholeheartedly endorse the ‘econ-101’ story (i.e. if you want more labour, try offering more money for people to sell labour to you) perplexing:
EA-land tends to be sympathetic to using 'econ-101' accounts reflexively on basically everything else in creation. I thought the received wisdom was that these approaches are reasonable at least for first-pass analysis, and that we'd need persuading to depart greatly from them.
Considerations why ‘econ-101’ won’t (significantly) apply here don’t seem to extend to closely analogous cases: we don’t fret (and typically argue against others fretting) about other charity’s paying their staff too much, we don’t think (cf. reversal test) that google could improve its human capital by cutting pay—keeping the ‘truly committed googlers’, generally sympathetic to public servants getting paid more if they add much more social value (and don’t presume these people are insensitive to compensation beyond some limit), prefer simple market mechs over more elaborate tacit transfer system (e.g. just give people money) etc. etc.
The precise situation makes the 'econ-101' intervention particularly appetising: if you value labour much more than the current price, and you are sitting atop an ungodly pile of lucre so vast you earnestly worry about how you can spend big enough chunks of it at once, 'try throwing money at your long-standing labour shortages' seems all the more promising.
Insofar as it goes, the observed track record looks pretty supportive of the econ-101 story—besides all the points Ryan mentions, compare “price suppression results in shortages” to the years-long (and still going strong) record of orgs lamenting they can’t get the staff.
Perhaps the underlying story is that, as EA-land is generally on the same team, one might hope to do better than taking one's cue from 'econ-101', given the typically adversarial/competitive dynamics it presumes between firms, and between employee and employer. I think this hope is forlorn: EA-land might be full of aspiring moral saints, but aspiring moral saints remain approximately homo economicus. So the usual stories about the general benefits of economic efficiency prove hard to better, and (play-pumps style) attempts to try feel apt to backfire (1, 2, 3, 4 - ad nauseam).
However, although I don’t think ‘PR concerns’ should guide behaviour (if X really is better than ¬X, the costs of people reasonably—if mistakenly—thinking less of you for doing X is typically better than strategising to hide this disagreement), many things look bad because they are bad.
In the good old days, I realised I was behind on my GWWC pledge so used some of my holiday to volunteer for a week of night-shifts as a junior doctor on a cancer ward. If in the future my ‘EA praxis’ is tantamount to splashing billionaire largess on a lifestyle for myself of comfort and affluence scarcely conceivable to my erstwhile beneficiaries, spending my days on intangible labour in well-appointed offices located among the richest places heretofore observed in human history, an outside observer may wonder what went wrong.
I doubt they would be persuaded that my defence is any better than obscene: "Not all heroes wear capes; some nobly spend thousands on yuppie accoutrements they deem strictly necessary for them to do the most good!". Nor would they be moved by my remorse: self-effacing acknowledgement is not expiation, nor complaisance to my own vices atonement. I still think jacking up pay may be good policy, but personally, perhaps I should doubt myself too.
I’m just saying that when we think offering more salary will help us secure someone, we generally do it. This means that further salary raises seem to offer low benefit:cost. This seems consistent with econ 101.
Likewise, it’s possible to have a lot of capital, but for the cost-benefit of raising salaries to be below the community bar (which is something like invest the money for 20yr and spend on OP’s last dollar—which is a pretty high bar). Having more capital increases the willingness to pay for labour now to some extent, but tops out after a point.
To be clear, I’m sympathetic to the idea that salaries should be even higher (or we should have impact certificates or something). My position is more that (i) it’s not an obvious win (ii) it’s possible for the value of a key person to be a lot higher than their salary, without something going obviously wrong.
I definitely agree EAs are motivated somewhat by money in this range.
My thought is more about how it compares to other factors.
My impression of hiring at 80k is that salary rarely seems like a key factor in choosing us vs. other orgs (probably under 20% of cases). If we doubled salaries, I expect existing staff would save more, donate more, and consume a bit more; but I don’t think we’d see large increases in productivity or happiness.
My impression is that this is similar at other orgs who pay similarly to us. Some EA orgs still pay a lot less, and I think there’s a decent chance this is a mistake – though you’d need to weigh it against the current cost-effectiveness of the project.
I think the PR risks for charities paying high salaries are pretty big—normal people hate the idea of charities paying a lot. Paying regular employees $200k in London would make them higher paid than the CEOs of most regular charities, including pretty big ones where the staff are typically middle aged. EA has also had a lot of kudos from the ‘living on not very much to donate’ meme. Most people aiming to do good are assumed to be full of shit, and living on not very much is a hard-to-fake symbol that shows you’re morally serious. I agree that meme has some serious downsides relative to ‘you can earn a bunch of money doing good’ meme, but giving up that kudos is a major cost – which makes the trade off ambiguous to me. Maybe it’s possible to have some of both by paying a lot but having some people donate most of it, or maybe you get the worst of both worlds.
Agree that we shouldn’t expect large productivity/wellbeing changes. Perhaps a ~0.1SD improvement in wellbeing, and a single-digit improvement in productivity—small relative to effects on recruitment and retention.
I agree that it’s been good overall for EA to appear extremely charitable. It’s also had costs though: it sometimes encouraged self-neglect, portrayed EA as ‘holier than thou’, EA orgs as less productive, and EA roles as worse career moves than the private sector. Over time, as the movement has aged, professionalised, and solidified its funding base, it’s been beneficial to de-emphasise sacrifice, in order to place more emphasis on effectiveness. It better reflects what we’re currently doing, who we want to recruit, too. So long as we take care to project an image that is coherent, and not hypocritical, I don’t see a problem with accelerating the pivot. My hunch is that even apart from salaries, it would be good, and I’d be surprised if it was bad enough to be decisive for salaries.
I think there are a few other considerations that may point in the direction of slightly higher salaries (or at least, avoiding very low salaries). EA skews young in age as a movement, but this is changing as people grow up ‘with it’ or older people join. I think this is good. It’s important to avoid making it more difficult for people to join/remain involved who have other financial obligations that come in a little later in life, e.g.
- child-rearing
- supporting elderly parents
Relatedly, lower salaries can be easier to accept for longer for people who come from wealthier backgrounds and have better-off social support networks or expectations of inheritance etc (it can feel very risky if one is only in a position to save minimally, and not be able to build up rainy day funds for unexpected financial needs otherwise).
I agree in principle, but in this case the alternative is eliminating $400k-4M of funding, which is much more expensive than doubling the salary of e.g. a research assistant.
To be clear, I am more skeptical of this valuation than I am actually suggesting doubling salaries. But conditional on one engaged donor entering the non-profit labor force being worth >$400k, it seems like the right call.
Not sure I follow the maths.
If there are now 10 staff, each paid $100k, and each generating $1m of value p.a., then we're paying $1m to generate $10m of value. The net gain is $10m - $1m = $9m, and the benefit:cost ratio is 10:1.
If we double salaries and get one extra staff member, we're now paying $2.2m to generate $11m of value. The excess is $8.8m. The average benefit:cost ratio has dropped to 5:1, and the benefit:cost ratio of the marginal $1.2m is actually below 1:1.
Agreed, just a function of how many salaries you assume will have to be doubled alongside to fill that one position
(a) Hopefully, doubling ten salaries to fill one is not a realistic model. Each incremental wage increase should expand the pool of available labor. If the EA movement is labor-constrained, I expect a more modest raise would cause supply to meet demand.
(b) Otherwise, we should consider that the organization was paying only half of market salary, which perhaps inflated their ‘effectiveness’ in the first place. Taking half of your market pay is itself an altruistic act, which is not counted towards the org’s costs. Presumably if these folks chose that pay cut, they would also choose to donate much of their excess salary (whether pay raise from this org, or taking a for-profit gig).
On b), for exactly that reason, our donors at least usually focus more on the opportunity costs of the labour input to 80k rather than our financial costs—looking mainly at ‘labour out’ (in terms of plan changes) vs. ‘labour in’. I think our financial costs are a minority of our total costs.
On a), yes, you’d need to hope for a better return than a doubling leads to +10% labour estimate I made.
If we suppose a 20% increase is sufficient for +10% labour, then the new situation would be:
Total costs: $1.32m
Impact: $11m
So, the excess value has increased from $9m to $9.7m, and the benefit:cost ratio of the marginal $320k is about 3:1. So this would be worth doing, though the cost-effectiveness is about a third of before. (In our case at least, I don't think a +20% increase to salaries would lead to +10% more hires though.)
It looks like the breakeven point with this simplified model is roughly an 80% increase in salaries to gain 10% more labour (i.e. the benefit:cost ratio of the marginal ~$1m is around 1:1). In reality I don't think we'd want to go that close to the breakeven point, because there may be better uses of the money, because of the reputation costs of unusually high salaries, and because salaries are harder to lower than to raise (so if uncertain, it's better to undershoot).
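As a minimal sketch (my own code, using the toy figures from this thread), here is the marginal benefit:cost ratio for a few salary increases:

```python
# Toy model from this thread: 10 staff at $100k, each generating $1m of value per year.
def salary_model(salary_increase, extra_staff=1, n_staff=10,
                 base_salary=100_000, value_per_staff=1_000_000):
    """Return (total cost, total value, marginal benefit:cost vs the baseline)."""
    base_cost = n_staff * base_salary
    cost = (n_staff + extra_staff) * base_salary * (1 + salary_increase)
    value = (n_staff + extra_staff) * value_per_staff
    marginal_bcr = (value - n_staff * value_per_staff) / (cost - base_cost)
    return cost, value, marginal_bcr

print(salary_model(1.0))  # doubling: $2.2m cost, $11m value, marginal B:C ~0.8
print(salary_model(0.2))  # +20%:     $1.32m cost, $11m value, marginal B:C ~3.1
print(salary_model(0.8))  # +80%:     $1.98m cost, $11m value, marginal B:C ~1.0 (breakeven)
```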
Good points, I agree it would be better to undershoot.
Still, even with the pessimistic assumptions, the high end of that $0.4-4M range seems quite unlikely.
Does 80k actually advise people making >$1M to quit their jobs in favor of entry-level EA work? If so, that would be a major update to my thinking.
It depends on what you mean by ‘entry level’ & relative fit in each path, but the short answer is yes.
If someone was earning $1m per year and didn’t think that might grow a lot further from there, I’d encourage them to seriously consider switching to direct work.
I.e. I think it would be worth doing a round of speaking to people at the key orgs, making applications and exploring options for several months (esp insofar as that can be done without jeopardising your current job). Then they could compare what comes up with their current role. I know some people going through this process right now.
If someone was already doing direct work and doing well, I definitely wouldn’t encourage them to leave if they were offered a $1m/year earning to give position.
The issue for someone already doing earning to give is the probability that they can find a role like that which is a good fit for them, which is far from guaranteed.
That all seems reasonable.
Shouldn’t the displacement value be a factor though? This might be wrong, but my thinking is (a) the replacement person in the $1M job will on average give little or nothing to effective charity (b) the switcher has no prior experience or expertise in non-profit, so presumably the next-best hire there is only marginally worse?
The estimates are aiming to take account of the counterfactual i.e. when I say “that person generates value equivalent to extra donations of $1m per year to the movement”, the $1m is accounting for the fact that the movement has the option to hire someone else.
In practice, most orgs are practicing threshold hiring, where if someone is clearly above the bar, they’ll create a new role for them (which is what we should expect if there’s a funding overhang).
I made a mistake in counting the number of committed community members.
I thought the Rethink estimate of the number of ~7,000 ‘active’ members was for people who answered 4 or 5 out of 5 on the engagement scale in the EA survey, but actually it was for people who answered 3, 4 or 5.
The number of people who answered 4 or 5 is only ~2,300.
I’ve now added both figures to the post.
Hi there!
I’m a bit confused about the claim that the bottleneck is ways to deploy funding rather than funding itself.
In global poverty and health cause areas for example, there are highly scalable EA-endorsed interventions like insecticide treated bed nets, deworming and cash transfers, and there are still plenty of people with malaria, children to deworm, and folks below the poverty line who could receive cash transfers. As far as I’m aware, AMF, Deworm the World / SCI and GiveDirectly could deploy more funds, and to the extent that they needed to hire more people to do so, I hypothesise they would be able to easily given that, as I understand it, there is a lot of competition to get jobs at organisations like these. What am I missing?
Thanks in advance!
Hi Aidan, the short answer is that global poverty seems the most funding constrained of the EA causes. The skill bottlenecks are most severe in longtermism and meta; e.g. at the top of the 'implications' section I said:
That said, I still think global poverty is 'talent constrained' in the sense that:
If you can design something that’s several-fold more cost-effective than GiveDirectly and moderately scalable, you have a good shot of getting a lot of funding. Global poverty is only highly funding constrained at the GiveDirectly level of cost-effectiveness.
I think people can often have a greater impact on global poverty via research, working at top non-profits, advocacy, policy etc. rather than via earning to give.
Thank you for your response! Makes sense. I’m not 100% convinced on the last point, but a few of your articles and 80k podcast appearances have definitely shifted me from thinking that E2G is unambiguously the best way for me to maximise the amount of near-term suffering I can abate, to thinking that direct work is a real contender. So thanks!!
We can take my estimates of the drop out rate to make an estimate of the equilibrium size of the movement.
If ~3% of the more engaged people drop out each year, and the flow of new members stays constant at ~300 per year (14% of 2300), then the number of highly engaged members will tend towards 10,000, which is 4-fold growth from today.
If the ratios stay the same, the number of people at the slightly broader definition of membership will tend towards 30,000.
This process will take ~25 years.
We’ll hopefully be able to grow the flow of people entering in that time(!). If we double the rate of new people entering each year, then we’ll double the long-term equilibrium size.
This also assumes that the drop out rate doesn’t change, but large changes are likely depending on fashion. I’ve also seen evidence that drop out rates tend to decline as people have been involved for longer, though on the other hand, eventually people will start to retire, increasing the drop out rate.
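A minimal sketch (my own code) of the equilibrium calculation above, using the same rough figures:

```python
# In equilibrium, the inflow of new members equals drop-outs:
# inflow = dropout_rate * size  =>  size = inflow / dropout_rate
def equilibrium(inflow, dropout_rate):
    return inflow / dropout_rate

print(equilibrium(inflow=300, dropout_rate=0.03))      # 10,000 highly engaged members (~4x today)
print(equilibrium(inflow=600, dropout_rate=0.03))      # doubling the inflow doubles the equilibrium
print(3 * equilibrium(inflow=300, dropout_rate=0.03))  # ~30,000 on the broader definition (same ratios)
```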
Readers might be interested in this Twitter thread on megaprojects, and the forum discussion of ideas.
This comment is a generic, low information poke at the excellent article:
One of the takeaways of this article is that there has been a dramatic expansion in EA funding, increasing the overhang of “money” (over “talent”).
I think this reasonably creates the impression that EA funding is now very abundant.
I’m interested or worried about the unintended side effects of this impression:
For an analogy, imagine making a statement about the EA movement needing more “skill in biology”. In response, this updates conscientious, strong EAs who change careers. However, what was actually needed was world class leaders in biology whose stellar careers involve special initial conditions. Unfortunately, this means that the efforts made by even very strong EAs were wasted.
I think such misperceptions can occur unintentionally. This motivates this comment.
With this motivation, it might be useful to interrogate these statements to try to get at the “qualia” or less tangible character behind the impression given of the new funding.
I’m not sure how to do this interrogation well. I’ve written speculative and likely unfairly aggressive scenarios about side effects from this impression of funding:
It undermines Earning to Give efforts that can give very large and hidden value to the movement (development of deep operational skills and benevolent coordination among EAs in industry, government and policy makers)
The concentration undermines nimbler funding of smaller, nascent organizations. To explain: Open Phil funding can be hard to get (this is offset by the thoughtful, generous creation of orgs such as EA Funds). These issues may increase with greater centralization, and the perception of ample money may also undermine the funding of small orgs.
The major new driver of funding is related to cryptocurrency. There are few industries as volatile or uncertain. While you specifically flag this, I’m worried this will be buried by the “top line” statement which reads EA funding has increased—if this turns out not to be true, choices and actions may have been made that decrease access to funding.
This concern is slightly different and related to alignment: there may be shifts in the EA movement as a result of new, very large funders. This concern increases due to the unusually conscientious, modest nature of the EA community, who update readily, and because new funders may focus on relatively few and less established cause areas. While we should expect some cultural change as a result of a major increase in funding, these factors may make the community unstable or reduce cohesion.
Normal concerns about diversity of funding.
My first sentence, about this comment being low information and maybe unfounded, was not rhetorical.
I am happy to be completely wrong in every way!
It’s an absolute good if EAs are updated by changing resources, and articles like are invaluable.
Does anyone else have any comments about this?
Agree it’s good to think about these things. Our past messaging wasn’t nuanced enough—I tried to correct for those issues in the main post, but there are probably going to be new messaging issues.
One quick comment is that I’m pretty worried about issues in the opposite direction e.g. that people aren’t being ambitious enough:
Most EA orgs are designed to use at most tens of millions of dollars per year of funding, but we should be trying to think of projects that could deploy $100m+ per year.
This doesn’t immediately strike me as a bad outcome, ex-ante. It’s very hard to know (1) who will become world class researchers or (2) if non-world-class people move the needle by influencing the direction of their field ever-so-slightly (maybe by increasing the incentives to work on an EA-problem by increasing citations here, peer-reviewing these papers, etc.). I, by no means, am world class, but I’ve written papers that (I hope) pave the way for better people to work on animal welfare in economics; participate in and attend conferences on welfare economics; signed a consensus statement on research methodology in population ethics; try to be a supportive/encouraging colleague of welfare-economists working on GPR topics; etc. I also worked under a world-class researcher in grad school and now sometimes serve as a glorified assistant (i.e., coauthor) who helps him flesh out and get more of his ideas to paper. In your example, if the community ‘needs more people in biology’ I think the scaffolding of the sorts I try to provide, is probably(?) still impactful. (Caveat: I’m almost certainly over-justifying my own impact, so take this with a grain of salt.)
If 80K was pushing people into undesirable careers with little earnings potential, this might be a legitimate problem. But I think most of the skills built in these hits-based careers are transferable and won't leave you in a bad spot.
Hi Kevin!
I saw your excellent posts on being an economics professor and also on cutting WiFi.
Both were great. It’s great to hear from your perspective as an economics professor and hear about your work!
Also, thanks for your comment. I think I get what you’re saying:
(It’s not clear why anyone should listen to my opinions about their life choices) but yes, it seems perfectly valid to go into any discipline, and you can have a huge value and generate impact in many paths of life.
Also, there’s a subthread here about elitism that is difficult to unpack, but it seems healthy to discuss “production functions”, skill and related worldviews explicitly at some point.
To be frank, by giving my narrative example, I was trying to touch on past messaging issues that actually happened.
These messaging issues are alluded to in this article, also by Benjamin Todd:
https://80000hours.org/2018/11/clarifying-talent-gaps/
Basically, the problem is as suggested in my example—in the past, the need for very specific skills or profiles was misinterpreted as a need for general talent. This did result in bad outcomes.
I chose to give my narrative instead of directly pointing to a past instance of the issue.
By doing this, I hoped to be more approachable to those less familiar with the history. It is also less confrontational while making the same point.
Thanks for writing back—and for the unnecessary compliments of my inaugural posts :) -- Charles! I only know the context of mis-messaging around skills at a high level, so it is hard for me to respond without knowing what 'bad outcomes' look like. I don't doubt that something like this could happen, so I now see the point you were trying to make.
I was responding as someone who read your (intentionally not fleshed out) hypothetical and thought the appropriate response might actually be for someone well-suited for ‘biology’ to work on building those broad skills even with a low probability of achieving the original goal.
edit: no longer relevant since OP has been edited since. (Thanks!)
(emphasis mine)
Just to clarify, that’s the EV of the path per year, right?
I assume this is also per year?
Clarifying because I think numbers like this are likely to be quoted/vaguely remembered in the future, and it’s easy to miss the per year part.
Yes, they’re all per year. I’ll add them.
David Goldberg adds on LinkedIn that the FP pledge value is now $5.7bn, rather than the $3.1bn in the table (I was using an old figure).
If we use the 25% to EA-aligned charities figure, that would be $1.4bn NPV rather than $0.8bn.
That 25% figure is also especially uncertain for FP. It could perhaps be anywhere from 2.5% to 50%.
Really liked this post, thanks.
Minor comment, wanted to flag that I think “Open Philanthropy has also reduced how much they donate to GiveWell-recommended charities since 2017.” was true through 2019, but not in 2020, and we’re expecting more growth for the GW recs (along with other areas) in the future.
Thanks! I probably should have just used the 2020 figure rather than the 2017-2019 average.
My estimate was an $80m allocation by Open Phil to global health, but this would suggest $100m.
I find this sort of post very useful and interesting. Thanks for writing it.
It would be great to have a similarly detailed and well-judged post on the growth of AI safety (including AI governance, etc). Seb Farquhar published a good post on that topic in 2017, demonstrating very rapid growth, but I haven’t seen anything similar since (please correct me if I’m wrong).
I think that that question would count Sam Bankman-Fried starting to give at the scale Good Ventures is giving as a positive resolution, and that some forecasters have that as a key consideration for their forecast (e.g., Peter Wildeford’s comment suggests that). Whereas I think you’re using this as evidence that there’ll be another donor at that scale, in addition to both Good Ventures and the FTX team people? So this might be double-counting?
(But I only had a quick look at both the Metaculus question and the relevant section of your post, so I might be wrong).
Ah good point. I only found the metaculus questions recently and haven’t thought about them as much.
Thanks, this data is really helpful—and it also is reassuring to know that people in the EA community are on top of this stuff. I would be disappointed if no one was.
I’m curious as to how the 3% per year number could be justified (via models, rather than by aggregating survey answers). It seems to me that it should be substantially higher.
Suppose you have my timelines (median 2030). Then, intuitively, I feel like we should be spending something like 10% per year. If you have 2055 as your median, then maybe 3% per year makes sense...
EXCEPT that this doesn’t take into account interest rates! Even if we spent 10% per year, we should still expect our total pot of money to grow, leaving us with an embarrassingly large amount of money going to waste at the end. (Sure, sure, it wouldn’t literally go to waste—we’d probably blow it all on last-ditch megaprojects to try to turn things around—but these would probably be significantly less effective per dollar compared to a world in which we had spread out our spending more, taking more opportunities on the margin over many years.) And if we spent 3%...
Idk. I’m new to this whole question. I’d love for people to explain more about how to think about this.
It’s a very difficult question. 3% was just the median. IIRC the upper quartile was more like 7%, and some went for 10%.
The people who gave higher figures usually either: (i) had short AI timelines—like you suggest (ii) believe there will be lots of future EA donors—so current donors should give more now and hope future donors can fill in for them.
For the counterargument, I’d suggest our podcast with Phil Trammel and Will on whether we’re at the hinge of history. Skepticism about the importance of AI safety and short AI timelines could also be an important part of the case (e.g. see our podcast with Ben Garfinkel).
One quick thing is that I think high interest rates are overall an argument for giving later rather than sooner!
I hadn’t even taken into account future donors; if you take that into account then yeah we should be doing even more now. Huh. Maybe it should be like 20% or so. Then there’s also the discount rate to think about… various risks of our money being confiscated, or controlled by unaligned people, or some random other catastrophe killing most of our impact, etc.… (Historically, foundations seem to pretty typically diverge from the original vision/mission laid out by their founders.)
I’ve read the hinge of history argument before, and was thoroughly unconvinced (for reasons other people explained in the comments).
Hmmm, toy model time: Suppose that our overall impact is log(what we spend in year 2021) + log(what we spend in year 2022) + log(what we spend in year 2023) + … etc. up until some year when existential safety is reached or the x-risk point of no return is passed.
Then is it still the case that going from e.g. a 10% interest rate to a 20% interest rate means we should spend less in 2021? Idk, but I’ll go find out! (Since I take this toy model to be reasonably representative of our situation)
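Here's a minimal sketch (my own code, not from the thread) that checks this toy model numerically: wealth compounds at interest rate r over a fixed horizon T, and we choose the fraction of wealth to spend each year to maximise the sum of log(spending). It assumes scipy is available.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_first_year_fraction(r, T, wealth=1.0):
    """Maximise sum of log(spend_t), where unspent wealth compounds at rate r."""
    def neg_impact(fractions):
        w, total = wealth, 0.0
        for f in fractions:
            spend = f * w
            total += np.log(max(spend, 1e-12))
            w = (w - spend) * (1 + r)   # remaining wealth earns interest
        return -total
    res = minimize(neg_impact, np.full(T, 1.0 / T), bounds=[(1e-6, 1.0)] * T)
    return res.x[0]   # fraction of current wealth spent in the first year

for r in [0.10, 0.20]:
    print(r, round(optimal_first_year_fraction(r, T=10), 3))
# With log utility the optimal first-year spending fraction comes out ~1/T (here ~0.1)
# for both interest rates, i.e. the interest rate drops out -- matching the reply below.
```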
I highly recommend the Founder’s Pledge report on Investing to Give. It goes through and models the various factors in the giving-now vs giving-later decision, including the ones you describe. Interestingly, the case for giving-later is strongest for longtermist priorities, driven largely by the possibility that significantly more cost-effective grants may be available in the future. This suggests that the optimal giving rate today could very well be 0%.
I think it’s implausible that the optimal giving rate today could be 0%. This is because many giving opportunities function as a form of investment, and we’re pretty sure that the best of those outperform the financial market. (I wrote more about ~this in this post: https://forum.effectivealtruism.org/posts/Eh7c9NhGynF4EiX3u/patient-vs-urgent-longtermism-has-little-direct-bearing-on )
Hi Owen, even if you’re confident today about identifying investment-like giving opportunities with returns that beat financial markets, investing-to-give can still be desirable. That’s because investing-to-give preserves optionality. Giving today locks in the expected impact of your grant, but waiting allows for funding of potentially higher impact opportunities in the future.
The secretary problem comes to mind (not a perfect analogy but I think the insight applies). The optimal solution is to reject the initial ~37% of all applicants and then accept the next applicant that’s better than all the ones we’ve seen. Given that EA has only been around for about a decade, you would have to think that extinction is imminent for a decade to count for ~37% of our total future. Otherwise, we should continue rejecting opportunities. This allows us to better understand the extent of impact that’s actually possible, including opportunities like movement building and global priorities research. Future ones could be even better!
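As a minimal sketch (my own code), here's a simulation of the classic secretary problem referenced above, under the standard assumption that the payoff is 1 only if the single best candidate is chosen:

```python
import random

def success_rate(n=100, cutoff_frac=0.37, trials=20_000):
    """Reject the first cutoff_frac of candidates, then take the first one better than all seen so far."""
    wins = 0
    for _ in range(trials):
        candidates = random.sample(range(n), n)   # random order; higher number = better candidate
        cutoff = int(cutoff_frac * n)
        best_seen = max(candidates[:cutoff]) if cutoff else -1
        chosen = next((c for c in candidates[cutoff:] if c > best_seen), candidates[-1])
        wins += chosen == n - 1                   # success only if we picked the overall best
    return wins / trials

print(success_rate(cutoff_frac=0.37))  # ~0.37: the optimal ~1/e cutoff
print(success_rate(cutoff_frac=0.10))  # ~0.23: a shorter 'look' phase does worse
```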
But the investment-like giving opportunities also preserve optionality! This is the sense in which they are investment-like. They can result in more (expected) dollars held in a future year (say a decade from now) by careful thinking people who will be roughly aligned with our values than if we just make financial investments now.
Thanks for the clarification, Owen! I had mis-understood ‘investment-like’ as simply having return compounding characteristics. To truly preserve optionality though, these grants would need to remain flexible (can change cause areas if necessary; so grants to a specific cause area like AI safety wouldn’t necessarily count) and liquid (can be immediately called upon; so Founder’s Pledge future pledges wouldn’t necessarily count). So yes, your example of grants that result “in more (expected) dollars held in a future year (say a decade from now) by careful thinking people who will be roughly aligned with our values” certainly qualifies, but I suspect that’s about it. Still, as long as such grants exist today, I now understand why you say that the optimal giving rate is implausibly (exactly) 0%.
If I recall correctly (and I may well be wrong), the secretary problem’s solution only applies if your utility is linear in the ranking of the secretary that you choose—I’ve never come across a problem where this was a useful assumption.
Interesting! The secretary problem does seem relevant as a model, thanks!
FWIW, many of us do think that. I do, for example.
Thanks Wayne, will read!
That toy model is similar to Phil’s, so I’d start by reading his stuff. IIRC with log utility the interest rate factors out. With other functions, it can go either way.
However, if your model is more like impact = log(all time longtermist spending before the hinge of history), which also has some truth to it, then I think higher interest rates will generally make you want to give later, since they mean you get more total resources (so long as you can spend it quickly enough as you get close to the hinge).
I think the discount rate for the things you talk about is probably under 1% per year, so doesn’t have a huge effect either way. (Whereas if you think EA capital is going to double again in the next 10 years, then that would double the ideal percentage to distribute.)
Will do, thanks!
(Just want to say that I did find it a bit odd that Ben’s post didn’t mention timelines to transformative AI—or other sources of “hingeyness”—as a consideration, and I appreciate you raising it here. Overall, my timelines are longer than yours, and I’d guess we should be spending less than 10% per year, but it does seem a crucial consideration for many points discussed in the post.)
Thanks for this great post, I think a must read for everyone working in the EA meta space.
Some thoughts on the following:
“I continue to think that jobs in government, academia, other philanthropic institutions and relevant for-profit companies (e.g. working on biotech) can be very high impact and great for career capital.”
I think we sometimes forget that these jobs in developing countries usually pay quite well. I wouldn't see earning to give and working in these institutions as opposites. There are jobs that give career capital with earning-to-give potential and that have the ability to have impact (probably after some years). But we should do some more research into the most relevant roles and organisations outside of EA organisations. E.g., I would expect a massive difference in expected impact potential between working for the US Ministry of Education and the US Ministry of Foreign Affairs.
I know the Effective Institutions Project works on a framework to help us make thoughtful judgments about which institutions’ decisions we should most prioritize improving as well as what strategies are most likely to succeed at improving them. But I think that is just the start: besides resources for (communication of) research on abovementioned topics, we also need to upskill EAs (and their colleagues) to make impact in these jobs and to accelerate their careers from a starter role to an impactful position. This would also enable growth of the EA movement as a whole, since there are plenty of positions giving career capital, E2G- and impact potential.
The EA Forum podcast has recorded an audio version of this post here: https://anchor.fm/ea-forum-podcast/episodes/Is-effective-altruism-growing—An-update-on-the-stock-of-funding-vs—people-e158mta
Short update on the situation: https://twitter.com/ben_j_todd/status/1561100678654672896
This reminded me of the following post, which may be of interest to some readers: Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation
Thanks for this really interesting post!
Overall I think all the core claims and implications sound right to me, but I’ll raise a few nit-picks in comments.
I agree with all that, but think that that’s a somewhat too narrow framing of how researchers can contribute to deploying these funds. I’d also highlight their ability to:
Help us sift through the existing ideas for projects, cause areas, “intermediate goals”, etc. to work out what would be high-priority/cost-effective (or even just what seems net-positive overall)
See also parts of Luke Muehlhauser’s A personal take on longtermist AI governance
Generate or sharpen insights, concepts, and/or vocabulary that can help the entrepreneurs, grantmakers, etc. do their work
E.g., as a (very new and temporary) grantmaker, I think I’ve probably done a better job because other people had previously developed the following concepts and terms and some analysis related to them:
information hazards
the unilateralist’s curse
disentanglement research
value of movement growth
talent constraints vs funding constraints vs vetting constraints
(a bunch of other things)
Maybe helping refine precise ideas for cause areas, projects, etc. (but I’m less sure what I mean by this)
(That said, I think some other people are more pessimistic than me either about how much research has helped on these fronts or how much it’s likely to in future. See e.g. some other parts of Luke’s post or some comments on What are novel major insights from longtermist macrostrategy or global priorities research found since 2015?)
I agree there are lots of forms of useful research that could feed into this, and in general better ideas feels like a key bottleneck for EA. I’m excited to see more ‘foundational’ work and disentanglement as well. Though I do feel like at least right now there’s an especially big bottleneck for ideas for specific shovel ready projects that could absorb a lot of funding.
Those looking to work at the intersection of academia, biorisk, biotech, global health/infectious disease, and philanthropic institutions may wish to look at roles at leading academic medical centers. A few years at Charité; Cleveland Clinic; one of the Harvard affiliates (e.g., the Brigham or MGH); JHU; Mayo Clinic; Toronto General; UCH in London; or another leading institution could give one some surprising flexibility to support EA projects within a well-resourced academic institution.
The following link from this week lists a number of new strategy jobs at Mayo Clinic. I suspect these roles would have career capital / impact benefits beyond what the brief job descriptions suggest. https://www.linkedin.com/feed/update/urn:li:activity:6825490031046639616/
Do you think this growth rate applies to the “Highly-Engaged EAs” classification as well, of which there were estimated to be 2,315 in the 2019 Rethink Priorities analysis?
Is this an estimate for the “Active EAs” at the end of 2020, or as of July 2021?
(Caveat to others that if you look at these estimates in Rethink Priorities initial 2019 report, you’ll find that while they are well-informed, they are quite rough, so precise estimates have limited value.)
Yes—I wasn’t trying to distinguish between the two.
Probably best to think of it as the estimate for 2020 (specifically, it’s based on the number of EA survey respondents in the 2019 survey vs. the 2020 survey).
This estimate is just based on one method. Other methods could yield pretty different numbers. Probably best to think of the range as something like −5% to 30%.
Thanks, that’s helpful info.
The article proposes that the two main ways to be engaged in EA are either a job or donating—but doesn't mention community building. I think this could be a fundamental flaw in thinking across the EA community and 80,000 Hours (sorry if calling it a flaw hurts anyone's feelings, but I get the impression people reading this will be okay thinking objectively about whether it's a flaw). Community building can't happen through single individuals; it takes a lot of individuals working together, so I find it striking that it's not mentioned in the article, since it's very much in line with the topic.
It’s possible that EA is still in its infancy and the amount of people working in or donating to EA is minute compared to what we’ll see in 100 years, and that the most important thing for EA could be the growth of the community.
Think of the wild success (in terms of getting people to join and contribute) of something like Catholicism. What if Christ had never come back from Mount Tabor or the desert? What if he had never preached? I just bring up religion to point out how much, as a social movement, it benefited from attracting followers. If you look at Mormons, they have their members do a mandatory program where they go try and convert other people to the religion. What does EA do for community building?
After all, what is EA without the community?
PS I don’t really have evidence for whether community building is more impactful than working or donating. I could only speculate. But I don’t see it mentioned and I think there’s a bias in this EA community to not consider it (although yes it is mentioned here and there it just doesn’t seem to be a focus area from what I’ve read so far).
I agree I should have mentioned movement building as one of the key types of roles we need.
I did mention it in my later talk specifically about the implications: https://80000hours.org/2021/11/growth-of-effective-altruism/
(Maybe you already think so, but...) it probably also depends a lot on the identity of that “someone” who is donating the $X (even if we restrict the discussion to, say, potential donors who are longtermism-aligned). Some people may have a comparative advantage with respect to their ability to donate effectively such that the EV from their donation would be several orders of magnitude larger than the “average EV” from a donation of that amount.
This seems like a fairly surprising claim to me, do you have a real or hypothetical example in mind?
EDIT: Also I feel like in many such situations, such people should almost certainly become grantmakers!
Imagine that all the longtermism ~aligned people in the world participate in a “longtermism donor lottery” that will win one of them $1M. My estimate is that the EV of that $1M, conditional on person X winning, is several orders of magnitude larger for X=[Nick Bostrom] than for almost any other value of X.
[EDIT: following the conversation here with Linch I thought about this some more, and I think the above claim is too strong. My estimate of the EV for many values of X is very non-robust, and I haven’t tried to estimate the EV for all the relevant values of X. Also, maybe potential interventions that cause there to be more longtermism-aligned funding should change my reasoning here.]
Why? Do you believe in something analogous to the efficient-market hypothesis for EA grantmaking? What mechanism causes that? Do grantmakers who make grants with higher-than-average EV tend to gain more and more influence over future grant funds at the expense of other grantmakers? Do people who appoint such high-EV grantmakers tend to gain more and more influence over future grantmaker-appointments at the expense of other people who appoint grantmakers?
I doubt this is literally true fwiw. If Bostrom, a very high-status figure within longtermist EA, has really good donation opportunities to the tune of $1 million, I doubt they'd go unfunded. I also feel like there are similar analogous experiments made in the past where relatively low-oversight grantmaking power was given to certain high-prestige longtermist EA figures (e.g. here and here). You can judge for yourself whether impact "several orders of magnitude higher" sounds right; personally I very much doubt it.
I meant “should” as a normative claim, not an empirical claim. Sorry if I miscommunicated.
Some evidence in this direction: Eliezer Yudkowsky recently wrote on a Facebook post:
This implies that all the really good funding opportunities Eliezer is aware of have already been funded, and any that appear can get funded quickly. Eliezer is not Nick Bostrom, but they’re in similar positions.
(Note: Eliezer’s Facebook post is publicly viewable, so I think reposting this quote here is ok from a privacy standpoint.)
Even ‘very high-status figures within longtermist EA’ can control a limited amount of funding, especially for requests that are speculative/weird/non-legible from the perspective of the relevant donors. I don’t know what’s the bar for “really good donation opportunities”, but the relevant thing here is to compare the EV of that $1M in the hands of Bostrom to the EV of that $1M in the hands of other longtermism aligned people.
Less importantly, you rely here on the assumption that being “a very high-status figure within longtermist EA” means you can influence a lot of funding, but the causal relationship may mostly be going in the other direction. Bostrom (for example) probably got his high-status in longtermist EA mostly from his influential work, and not from being able to influence a lot of funding.
To be clear, I don’t think my reasoning here applies generally to “high-prestige longtermist EA figures”. Though this conversation with you made me think about this some more and my above claim now seems to me too strong (I added an EDIT block).
I’m glad to cause an update! Hopefully it’s in the right direction! :)
Again, there are proxies you can look at, like what Carl Shulman donates to vs what actual winners of the donor lottery donates to. But maybe you don’t consider this much evidence, if you posit that Nick Bostrom specifically has unusually high discernment, specifically enough to donate to things in the band of activities that are “speculative/weird/non-legible” from the perspective of the relevant donors, but not speculative/weird/non-legible enough that the donor lottery administration won’t permit this.
I guess my rejoinder here is just an intuitive sense of disbelief? Several (say >=3?) orders of magnitude above 1 million gets you >1B, and as can be deduced in the figures in the post above, this is already well over the annual long-termist spending every year. If we believe that Nick Bostrom can literally accomplish much more good with 1 million than money allocated by the rest of the longtermist EA movement combined (including all money sent to FHI, where he works), isn’t this really wild? Also why aren’t we sending more money to Nick Bostrom to regrant?
(Though perhaps you came to the same conclusion by now).
I’m confused about what you’re saying here. P(B| do A) is not evidence against P(A|B), except in very rare circumstances.
My reasoning here is indeed based specifically on the track record of Nick Bostrom. (Also, I’m imagining here a theoretical donor lottery where the winner has 100% control over the money that they won.)
I was not comparing $1M in the hands of Bostrom to $1B in the hands of a random longtermism-aligned person. (The $1B would plausibly be split across many grants, and it’s plausible that Bostrom would end up controlling way more than $1M out of it.)
As an aside, without thinking about it much, it seems to me that the EV from the publication of the book Superintelligence is plausibly much higher than the total EV from everything else that was accomplished by the rest of the longtermist EA movement so far. (I can easily imagine myself updating away from that if I try to enumerate the things that were accomplished by the longtermist EA movement).
To answer this, I think the word “we” should be replaced with something more specific: why don’t grantmakers at longtermism-aligned grantmaking orgs send more money to Bostrom to regrant? One response is that there is probably nothing analogous to the efficient-market hypothesis for EA grantmaking (see the last paragraph here). Also, the grantmakers are in implicit competition with each other over influence on future grant funds. A grantmaker who makes grants that are speculative, weird, non-legible, or have a high probability of failing may tend to lose influence over future grant funds, and perhaps reduce the amount of future longtermist funding that their org can give.
Imagine that Bostrom uses the additional $1M to hire another assistant, or a manager for FHI, which simply results in Bostrom being a bit more productive. Looking at this through the lens of the grantmakers’ incentives, how would that $1M grant compare to the average LTFF grant?
If we estimate P(A|B) based on a correlation that we observe between A and B, then the existence of a causal relationship from A to B is indeed evidence that should update our estimate of P(A|B) towards a lower value.
In the hypothetical setup, it’s in theory the same person each time – since the comparison is between the same person earning to give or working in one of these roles.
I don’t think this addresses ofer’s objection, if I understand it correctly (but then again, the length of our back-and-forth comments is maybe strong evidence against me understanding the objection correctly!).
Hi Ben. I came across this article sort of at random and wanted to weigh in.
I’m in senior management at a for-profit (non-EA-affiliated) company. In principle, the idea of EA is very appealing to me. I absolutely agree that doing good “correctly” is really important. Prior to the last couple of years, I could easily have seen myself joining an EA org.
But over those years, as my exposure to Silicon Valley and the kinds of groups that overlap heavily with EA (e.g. “rationalists”) has grown, I’ve become more reluctant to support it. Bluntly, I don’t trust groups with very high proportions (perhaps majorities?) of people who believe in some form of ‘scientific racism’ to solve the problems of the world’s most vulnerable, who are almost entirely of races they consider biologically incapable of governing themselves. Nor do I trust groups that are so eager to deny what are (to me) self-evident problems with the local culture to solve problems within other cultures (especially ones they consider inferior).
“Effective” altruism requires a notion of what “effect” is. And when I find myself surrounded by people who seem so determined to ignore the clear, stated needs of the people around them, I am concerned that the “effect” they want is not the “effect” I want. How can I trust someone who won’t see sexism right in front of him (choice of pronoun very much intentional) not to exacerbate sexism through his efforts? How can I trust someone who thinks Africans are genetic cretins to have the cultural respect to help them build a functioning society within the structures that make sense to them? Paternalism combined with a refusal to understand conditions on the ground is the king of all altruistic failure modes.
I get that there are a lot of stupid takes on the broader tech world (which I would consider EA to be a part of) and on the kind of people in it. All that “you can’t reduce feelings to numbers” nonsense is dumb. All the “white men are trying to help brown people and that’s racist” takes are dumb. I don’t want to throw the baby out with the bathwater entirely. But the longer I’m around, the bigger I think these problems are, and the less welcome I feel, both for what I am and for what I believe. Management is a social discipline, and its practitioners do not particularly enjoy having the importance of social factors (which we know are vital even in very small organizations, much less in societies of millions) dismissed.
I don’t know that I have a solution to offer you here. For myself, I’m increasingly of the view that founder effects have unfortunately tainted the movement beyond repair. But maybe my voice can at least give you a sense of where some of your problems with finding people-oriented talent lie.
I appreciate you sharing why you have a negative impression of the effective altruism movement and aren’t interested in joining an EA org; you might be getting downvoted under the “clear, on-topic, and kind” comment guideline, but I’m not sure. In my own experience, there sure are lots of frustrating Silicon Valley memes out in the world that are overly dismissive of social factors (or of sexism and racism), but they aren’t dominant among people actually doing direct EA-affiliated work. As a few recent examples that demonstrate a sensitivity to the importance of social factors, I enjoyed this 80,000 Hours Podcast with Leah Garcés on strategic and empathetic communication for animal advocacy, and this post on surprising things learned from a year of working on policy to eliminate lead exposure in Malawi, Botswana, Madagascar, and Zimbabwe.
I’m surprised to see this so heavily downvoted – I’ve also had concerns about EA culture with regard to sex and race, and I wouldn’t be surprised if it puts off people with some of the soft skills EA is missing. This comment definitely exaggerates, and I’m not happy about that, but the underlying idea – that people who are good at navigating social dynamics are wary of EA, which contributes to the talent gap – is pretty interesting.
Hi! Like Tessa, I appreciate you sharing your concerns about the EA movement. I downvoted because some of your criticisms seem off the mark to me. Specifically, in the two years I’ve been highly involved in EA, I haven’t heard a single person say that non-white people are “biologically incapable of governing themselves.” The scientific consensus is that “claims of inherent differences in intelligence between races have been broadly rejected by scientists on both theoretical and empirical grounds” (Wikipedia), so it seems like a bizarre thing for an EA to say. Do you mind telling us where you’ve heard someone in the EA community say this?
Sure. To take one concrete example, I know this is an explicit belief of Scott Alexander’s (author of SlateStarCodex/AstralCodexTen and a major LessWrong contributor – these are the two largest specific sources of EA growth beyond generics like “blog” or 80,000 Hours itself, per this breakdown, and were my own entry point into awareness of EA). This came out through a series of leaked emails which cite people like Steve Sailer (second link under “1. HBD is probably partially correct”) in a general defense of neoreactionaries. Yes, these emails are old, but (a) he’s made no effort to claim they’re incorrect, and (b) he’s very recently defended people like Steve Hsu, who explicitly endorse HBD on the grounds that it is a valid theory that deserves space for advocacy. I also know Scott and his immediate associates personally, and their defenses of his views to me made no effort to pretend those views were otherwise.
When this fact came out, I was quite horrified and said as much. I assumed this would be a major shock. Instead, I was unable to find a single member of the Berkeley rationalist community who had a problem with it. I asked quite a few, and all of them (without exception) endorsed a position that I can roughly sum up as “well sure, the fact is that black people are/probably are genetically stupid, but we’re not mean and just stating a fact so it’s fine”. This included at least one person involved heavily with planning EA Global events here in the Bay Area, and included every single person I know personally who has even the loosest affiliation with EA. To my knowledge, not one of these people has a problem with explicit endorsement of the belief that black people are genetically stupider than white people.
To be clear, I don’t think that makes them insincere. I believe they believe what they’re saying, and I believe that they are sincerely motivated to make the world better. That’s why I was part of that community in the first place – the people involved are indeed very kind and pleasant day to day, to the point that this ugliness could hide for a long time. So I don’t think stuff like “it seems like a bizarre thing for an EA to say” applies: I think they basically believe that being effective requires facts, and that ‘scientific racism’ is a fact, or at least a probable fact. There’s nothing inconsistent about that set of beliefs, abhorrent though it is to me.
Hey, I thought this discussion could use some data. I also added some personal impressions.
These are the results of the 2020 SSC survey.
For the question “How would you describe your opinion of the [sic] the idea of ‘human biodiversity’, eg the belief that races differ genetically in socially relevant ways?”, where 1 is “Very unfavorable” and 5 is “Very favorable”: 20.8% answered 4 and 8.7% answered 5. The answers look similar for 2019.
Taking that at face value, roughly 30% of Scott’s readers (20.8% + 8.7% = 29.5%) think favorably of “HBD”.
(I guess you could look at it as “80% of SSC readers fail to condemn scientific racism”. But that doesn’t strike me as charitable.)
From the same survey, 13.6% identified as EAs, and 33.4% answered “sorta EA”.
I should mention that the survey has some nonsensical answers (IQs of 186, verbal SATs of 30). And it appears that many respondents lean liberal (identifying as liberals, thinking favorably of feminism and of more open borders, while thinking unfavorably of Trump).
A while ago, Gwern wrote
I’m trying to imagine what global development charities EAs who believe HBD donate to, and I’m having a hard time.
Assuming this implies that some EAs (1–5%?) believe in this, I would reckon they’re more focused on X-risks or animal welfare. (I don’t think this is true anymore; see the comment below.) It would be helpful to see how the people who identify as EAs answered this question.
Finally, regarding Scott’s email (the sharing of which I think was a horrible violation of privacy), the last sentence is emblematic of the attitude of lots of people in the community (myself included). My Goodreads contains lots of books I expect to disagree with or be offended by (Gyn/Ecology, The Bell Curve), but I still think it’s important to look into them.
Valuing new insights sometimes means looking into things no one else would, and that has been very useful for the community (fish/insect welfare, longtermism). But unfortunately, one risk is that at least some people will come out believing (outrageously) wrong things. I think that is worth it.
On a personal note, I’m black, and a community organizer, and I haven’t encountered anything but respect and love from the EA community.
Great comment!
I don’t totally follow why “the belief that races differ genetically in socially relevant ways” would lead one not to donate to, for example, the Against Malaria Foundation or GiveDirectly. Assuming, for example, a (slightly?) lower average IQ, it seems to me that less malaria or more money will still do most of what one would hope for, and what the RCTs say they do, even if you might expect (slightly?) lower economic growth potential and, in the longer term, (slightly?) less potential for those regions to become hubs of highly specialized skilled labor?
I think you’re right. I guess I took Gwern’s comment at face value and tried to figure out how development aid would look different due to the “huge implications”, which was hard.