I feel like this post mostly doesn’t talk about what feels to me like the most substantial downside of trying to scale up spending in EA and of the increased availability of funding.
I think the biggest risk of the increased availability of funding, and of the general increase in scale, is that it will create a culture where people will be incentivized to act more deceptively towards others, and that it will attract many people who will be much more open to deceptive action in order to take the resources we currently have.
Here are some paragraphs from an internal memo I wrote a while ago that tried to capture this:
========
I think it was Marc Andreessen who first hypothesized that startups usually go through two very different phases:
Pre Product-market fit: At this stage, you have some inkling of an idea, or some broad domain that seems promising, but you don’t yet have anything that solves a really crucial problem. This period is characterized by small teams working from their inside view, and a shared, tentative, malleable vision that is often hard to explain to outsiders.
Post Product-market fit: At some point you find a product that works for people. The transition here can take a while, but by the end of it, you have customers and users banging on your door relentlessly to get more of what you have. This is the time of scaling. You don’t need to hold a tentative vision anymore, and your value proposition is clear to both you and your customers. Now is the time to hire people and scale up and make sure that you don’t let the product-market fit you’ve discovered go to waste.
I think it was Paul Graham or someone else close to YC (or maybe Ray Dalio) who said something like the following (NOT A QUOTE, since I currently can’t find the direct source):
> The early stages of an organization are characterized by building trust. If your company is successful, and reaches product-market fit, these early founders and employees usually go on to lead whole departments. Use these early years to build trust and stay in sync, because when you are a thousand-person company, you won’t have the time for long 10-hour conversations when you hang out in the evening.
> As you scale, you spend down that trust that you built in the early days. As you succeed, it’s hard to know who is here because they really believe in your vision, and who just wants to make sure they get a big enough cut of the pie. That early trust is what keeps you agile and capable, and frequently as we see founders leave an organization, and with that those crucial trust relationships, we see the organization ossify, internal tensions increase, and the ability to effectively respond to crises and changing environments get worse.
It’s hard to say how well this model actually applies to startups or young organizations (it matches some of my observations, though definitely far from perfectly), and it’s even more dubious how well it applies to systems like our community, but my current model is that it captures something pretty important.
Whether we want it or not, I think we are now likely in the post-product-market-fit part of the lifecycle of our community, at least when it comes to building trust relationships and onboarding new people. I think we have become high-profile enough, have enough visible resources (especially with FTX’s latest funding announcements), and have gotten involved in enough high-stakes politics, that if someone shows up next year at EA Global, you can no longer confidently know whether they are there because they have a deeply shared vision of the future with you, or because they want to get a big share of the pie that seems to be up for the taking around here.
I think in some sense that is good. When I see all the talk about megaprojects and increasing people’s salaries and government interventions, I feel excited and hopeful that maybe if we play our cards right, we could actually bring any measurable fraction of humanity’s ingenuity and energy to bear on preventing humanity’s extinction and steering us towards a flourishing future, and most of those people of course will be more motivated by their own self-interest than their altruistic motivation.
But I am also afraid that with all of these resources around, we are transforming our ecosystem into a market for lemons. That we will see a rush of ever greater numbers of people into our community, far beyond our ability to culturally onboard them, and that nuance and complexity will have to be left by the wayside in order to successfully maintain any sense of order and coherence.
I think it is not implausible that for a substantial fraction of the leadership of EA, within 5 years, there will be someone in the world whose full-time job and top-priority it is to figure out how to write a proposal, or give you a pitch at a party, or write a blogpost, or strike up a conversation, that will cause you to give them money, or power, or status. For many months, they will sit down many days a week and ask themselves the question “how can I write this grant proposal in a way that person X will approve of” or “how can I impress these people at organization Y so that I can get a job there?”, and they will write long Google Docs to their colleagues about their models and theories of you, and spend dozens of hours thinking specifically about how to get you to do what they want, while drawing up flowcharts that will include your name, your preferences, and your interests.
I think almost every publicly visible billionaire has whole ecosystems spring up around them that try to do this. I know some of the details here for Peter Thiel and the “Thielosphere”, which seems to have a lot of these dynamics. Almost any academic at a big lab will openly tell you that one of the most crucial pieces of knowledge any new student learns when they join is how to write grant proposals that actually get accepted. When I ask academics in competitive fields about the content of the lunch conversations in their labs, the fraction of their cognition and conversations that goes specifically to “how do I impress tenure review committees and grant committees” and “how do I network myself into an academic position that allows me to do what I want” ranges from 25% to 75% (with the median around 50%).
I think there will still be real opportunities to build new and flourishing trust relationships, and I don’t think it will be impossible for us to really come to trust someone who joins our efforts after we have become ‘cool’, but I do think it will be harder. I also think we should cherish and value the trust relationships we do have between the people who got involved with things earlier, because I do think that lack of doubt about why someone is here is a really valuable resource, and one that I expect is more and more likely to be a bottleneck in the coming years.
Reading this, I guess I’ll just post the second half of the memo here as well, since it has some additional points that seem valuable to the discussion:
When I play forward the future, I can imagine a few different outcomes, assuming that my basic hunches about the dynamics here are correct at all:
I think it would not surprise me that much if many of us do fall prey to the temptation to use the wealth and resources around us for personal gain, or as a tool towards building our own empire, or come to equate “big” with “good”. I think the world’s smartest people will generally pick up on us not really aiming for the common good, but I do think we have a lot of trust to spend down, and could potentially keep this up for a few years. I expect eventually this will cause the decline of our reputation and ability to really attract resources and talent, and hopefully something new and good will form from our ashes before the story of humanity ends.
But I think in many, possibly most, of the worlds where we start spending resources aggressively, whether for personal gain or because we really do have a bold vision for how to change the future, the relationships of the central benefactors to the community will change. I think it’s easy to forget that for most of us, the reputation and wealth of the community is ultimately borrowed, and when Dustin, Cari, Sam, Jaan, Eliezer, or Nick Bostrom see how their reputation or resources get used, they will already be on high alert for people trying to take their name and their resources, and be ready to take them away when it seems like they are no longer obviously being used for public benefit. I think in many of those worlds we will be forced to run projects in a legible way; or we will choose to run them illegibly, and be surprised by how few of the “pledged” resources were ultimately available for them.
And of course in many other worlds, we learn to handle the pressures of an ecosystem where trust is harder to come by, and we scale, and find new ways of building trust, and take advantage of the resources at our fingertips.
Or maybe we split up into different factions and groups, and let many of the resources that we could reach go to waste, as they ultimately get used by people who don’t seem very aligned to us, but some of us think this loss is worth it to maintain an environment where we can think more freely and with less pressure.
Of course, all of this is likely to be far too detailed to be an accurate prediction of what will happen. I expect reality will successfully surprise me, and I am not at all confident I am reading the dynamics of the situation correctly. But the above is where my current thinking is at, and is the closest to a single expectation I can form, at least when trying to forecast what will happen to people currently in EA leadership.
To also take a bit more of an object-level stance: I currently, very tentatively, don’t think this shift is worth it. I don’t actually have any plans that seem hopeful or exciting to me that really scale with a lot more money or a lot more resources, and I would really prefer to spend more time without needing to worry about full-time people scheming about how to get me, specifically, to like them.
However, I do see the hope and potential in actually going out and spending the money and reputation we have to maybe get much larger fractions of the world’s talent to dedicate themselves to ensuring a flourishing future and preventing humanity’s extinction. I have inklings and plans that could maybe scale. But I am worried that I’ve already started trying to primarily answer the question “but what plans can meaningfully absorb all this money?” instead of the question of “but what plans actually have the highest chance of success?”, and that this substitution has made me worse, not better, at actually solving the problem.
I think historically we’ve lacked important forms of ambition. And I am excited about us actually thinking big. But I currently don’t know how to do it well. Hopefully this memo will make the conversations about this better, and maybe will help us orient towards this situation more healthily.
========
To onlookers: there’s often a low amount of resolution and expertise in some comments and concerns on LW and the EA Forum, and this creates “bycatch” and reduces clarity. With uncertainty, I’ll lay out one story that seems like it matches the concerns in the parent comment.
Strong Spending
I’m not entirely sure this is correct, but for large EA spending, I usually think of the following:
30%-70% growth in head count at established institutions, sustained for multiple years (see the rough sketch after this list for how that compounds)
Near-six-figure salaries for junior talent, and well over six figures for very good talent and for managers who can scale and build an organization (people who could earn multiples of that in the private sector and who can make an organization exist and have impact)
Seven-figure salaries for extreme talent (the world’s best in applied math or CS, top lawyers)
Discretionary spending
Buying operations, consulting and other services
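To give a feel for what the first bullet implies, here is a rough sketch of how sustained 30%-70% annual growth compounds over a few years. The starting team size and the blended salary figure are my own placeholder assumptions, not numbers from this comment.

```python
# Rough illustration of compounding headcount growth.
# Starting size and blended salary are assumed placeholders, not data from the thread.

def project_headcount(start: int, annual_growth: float, years: int) -> list[int]:
    """Project headcount for each year under compounding annual growth."""
    counts = [start]
    for _ in range(years):
        counts.append(round(counts[-1] * (1 + annual_growth)))
    return counts

if __name__ == "__main__":
    start_headcount = 20       # assumed starting team size
    blended_salary = 120_000   # assumed average salary in USD
    for growth in (0.30, 0.70):
        path = project_headcount(start_headcount, growth, years=4)
        print(f"{growth:.0%} growth: headcount by year {path}, "
              f"year-4 payroll ≈ ${path[-1] * blended_salary:,}")
```

Even at the low end, a 20-person organization roughly triples in four years under these assumptions, which is part of why the leadership point below matters so much.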
All of the above is manageable, even sort of fundamental, for a good leader, ED, or CEO. This is why quality leadership is so important: to hire and integrate this talent well and to manage this spending. This is OK.
This is considered “high” spending, but it’s not really high by real-world standards.
Next-level Next-level
Now, distinct from the above, there’s a whole other reference class of spending where:
People can get an amount of cash that is a large fraction of all spending in an existing EA cause area in one raise.
The internal environment is largely “deep tech” or not related to customers or operations
So I’m thinking about valuations in the 2010- tech sector for trendy companies.
I’m not sure, but my model is that organizations that can raise 8 figures per person in a Series B, for spending that is pretty much purely CapEx (as opposed to capital to support operations or lower-margin activity, e.g. inventory, logistics), have internal activity that is really, really different from the “high” spending described above.
There are issues here that are hard to appreciate.
So Facebook’s raises were really hot and oversubscribed. But building the company was a drama fest for the founders, and there was also a nuclear-reactor-hot business with viral growth underneath. That means epic fires to put out every week, customers and partners, and the actual scaling issues of hockey-stick growth (not this meta-level business-advice discussion on the forum). It’s a mess. So the CEO, and even junior people, have to deal with it.
But once you’re just raising that amount in deep-tech mode, I have guesses about how people think, feel, and behave inside a company with valuations in the 8-9 figures per person. My guess is that the attractiveness, incentives, and beliefs in that environment are really different from even the hottest startups, even beyond those where junior people exit with 7 figures of income.
To be concrete, the issues for the rest of EA might be that:
Even strong EA CEOs won’t be able to hire much EA talent, like software developers (though really they should be worried about hiring pretty much anyone). If they do hire, they won’t be able to keep people at comfortable, above-EA salaries without worrying about attrition.
Every person who can convincingly claim or signal interest in a cause area is inherently going to be treated very differently in any discussion or interaction, in a deep way that I don’t think EA has seen.
The dynamics that emerge mean that good people won’t feel comfortable adding this to their cause area anymore.
Again, this is not “strong spending” but the “next-level, next-level” world: funding that is hard to match in human history in any for-profit, plus a nature of work that is different from any other.
I’m not sure, but in situations where this sort of dynamic or resource gradient happens, it isn’t resolved by the high gradient stopping (people don’t stop funding or founding institutions), because the original money is driven by underlying forces that are really strong. My guess is that a lot of this would be counterproductive.
Typically in those situations, I think the best path is moderation and focusing on development and culture in other cause areas.
These are some very important points, thanks for taking the time to write them out.
I just made an account here (I’ve only ever commented on LW before) just to stress how important it is to soberly assess the change in incentives, because even the best people have strengths and weaknesses that need to be adapted to.
“Show me the incentives and I will show you the outcome”—Charlie Munger
I thought this comment was valuable and it’s also a concern I have.
It makes me wonder if some of the “original EA norms”, like donating a substantial proportion of income or becoming vegan, might still be quite important for building trust, even as they seem less important in the grand scheme of things (mostly because of the increase in the proportion of people believing in longtermism). This post makes a case for signalling.
It also seems to increase the importance of vetting people in somewhat creative ways. For instance, did they demonstrate altruistic things before they knew there was lots of money in EA? I know EAs who spent a lot of their childhoods volunteering, told their families to stop giving them birthday presents and instead donate to charities, became vegan at a young age at their own initiative, were interested in utilitarianism very young, adopted certain prosocial beliefs their communities didn’t have, etc. When somebody did such things long before it was “cool” or they knew there was anything in it for them, this demonstrates something, even if they didn’t become involved with EA until it might help their self-interest. At least until we have Silicon Valley parents making sure their children do all the maximally effective things starting at age 8.
It’s kind of useful to consider an example, and the only example I can really give on the EA forum is myself. I went to one of my first EA events partially because I wanted a job, but I didn’t know that there was so much money in EA until I was somewhat involved (also this was Fall 2019, so there was somewhat less money). I did some of the things I mentioned above when I was a kid (or at least, so I claim on the EA forum)! Would I trust me immediately if I met me? Eh, a bit but not a lot, partially because I’m one of the hundreds of undergrads somewhere near AI safety technical research and not (e.g.) an animal welfare person. It would be significantly easier if I’d gotten involved in 2015 and harder if I’d gotten involved in 2021.
Part of what this means is that we can’t rely on trust so much anymore. We have to rely on cold, hard accomplishments. It’s harder, it’s more work, and it feels less warm and fuzzy, but it seems necessary in this second phase. This means we have to get better at evaluating accomplishments in ways that don’t rely on social proof. I think this is easier in some fields (e.g. earning to give, distributing bednets) than others (e.g. policy), but we should try in all fields.
How bad is it to fund someone untrustworthy? Obviously, if they take the money and run, that would be a total loss, but I doubt that’s a particularly common occurrence (you can only do it once, and it would completely shatter your social reputation, so even unethical people don’t tend to do that). A more common failure mode would seem to be apathy, where once funded not much gets done, because the person doesn’t really care about the problem. However, if something gets done instead of nothing at all, then that would probably be a (fairly weak) net positive. The reason this is normally negative is that the money then isn’t being used in a more cost-effective manner, but if our primary problem is spending enough money in the first place, that may not be much of an issue at all.
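To make the trade-off in the paragraph above explicit, here is a toy expected-value sketch. Every probability and dollar figure below is a made-up illustrative assumption, not an estimate from this thread; the point is only that the answer hinges heavily on the counterfactual (opportunity-cost) term.

```python
# Toy expected-value model for funding a possibly-untrustworthy grantee.
# All numbers are illustrative assumptions, not data from the thread.

def grant_net_ev(p_run: float, p_apathy: float,
                 value_if_delivers: float, value_if_apathetic: float,
                 counterfactual_value: float) -> float:
    """Expected net value of making the grant, relative to the best
    alternative use of the same money (the opportunity cost)."""
    p_delivers = 1.0 - p_run - p_apathy
    expected_gross = (
        p_run * 0.0                        # take the money and run: total loss
        + p_apathy * value_if_apathetic    # funded, but little gets done
        + p_delivers * value_if_delivers   # project mostly works out
    )
    return expected_gross - counterfactual_value

if __name__ == "__main__":
    ev = grant_net_ev(p_run=0.02, p_apathy=0.40,
                      value_if_delivers=300_000, value_if_apathetic=50_000,
                      counterfactual_value=150_000)
    print(f"Expected net value vs. best alternative: ${ev:,.0f}")
```

If money genuinely isn’t the bottleneck, the counterfactual term shrinks and even an apathetic grantee can look weakly positive; the replies below push back on exactly the assumption that there is no downside beyond opportunity cost.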
I think it’s easier than it might seem to do something net negative even ignoring opportunity cost. For example, actively compete with some other better project, interfere with politics or policy incorrectly, create a negative culture shift in the overall ecosystem, etc.
Besides, I don’t think the attitude that our primary problem is spending down the money is prudent. This is putting the cart before the horse and, as Habryka said, might lead to people asking “how can I spend money quickly?” rather than “how can I ambitiously do good?” EA certainly has a lot of money, but I think people underestimate how fast $50 billion can disappear if it’s mismanaged (see, for an extreme example, Enron).
That’s a fair point, thank you for bringing that up :)
Thomas—excellent reply, and good points. I’ve written a bit about virtue signaling, and agree that there are good forms (reliable, predictive) and bad forms (cheap talk, deceptive, misguided) of virtue signaling.
I also agree that EA could be more creative and broad-minded about what kinds of virtue signaling are likely to be helpful in predicting future integrity, dedication, and constructiveness in EA. Historically, a lot of EA signaling has involved living frugally, being vegan, being a good house-mate in an EA shared house, collaborating well on EA projects, getting lots of upvotes on the EA Forum, etc. Assessing those signals accurately requires a lot of first-hand or second-hand knowledge, which can be hard to do at scale as the EA movement grows.
As EA grows in scale and becomes more diverse in terms of background (e.g. recruits more established professionals from other fields, not just recent college grads), we may need to get savvier about domain-specific virtue signals, e.g. how do medical researchers vs geopolitical security experts vs defense attorneys vs bioethicists vs blockchain developers show their true colors?
The very tricky trade-off, IMHO, is that the virtue signals that are most reliable for predicting personality traits (honesty, humility, conscientiousness, kindness) are often the least efficient in terms of actually accomplishing real-world good. For example, defense attorneys who do a lot of pro bono work on appeals for death row inmates might be showing genuine dedication and altruism, but this might be among the least effective uses of their time for achieving criminal justice reform. So, do we want the super-trustworthy but scope-insensitive lawyers involved in EA, or the slightly less virtue-signaling but more rational and scope-sensitive lawyers?
That seems like a real dilemma. Traditionally, EA has solved it mostly by expecting a fair amount of private personality-signaling (e.g. being a conscientious vegan house-mate) plus a lot of public, hyper-rational, scope-sensitive analysis and discussion.
I share your worries about the effects on culture. At the same time I don’t see this vision as bad:
> For many months, they will sit down many days a week and ask themselves the question “how can I write this grant proposal in a way that person X will approve of” or “how can I impress these people at organization Y so that I can get a job there?”, and they will write long Google Docs to their colleagues about their models and theories of you, and spend dozens of hours thinking specifically about how to get you to do what they want, while drawing up flowcharts that will include your name, your preferences, and your interests.
Imagine a global health charity that wants to get onto the GiveWell Top Charities list. Wouldn’t we want it to spend a lot of time thinking about how to get there, ultimately changing the way it works in order to produce the evidence needed to get included? For example, Helen Keller International was founded more than 100 years ago, and its vitamin A supplementation program is recommended by GiveWell. I would love to see more external organisations change in order to get EA grants, instead of us trying to reinvent the wheel where others might already be good.
Organisations getting started or changing based on the available funding of the EA community seems like a win to me. As long as they have a mission that is aligned with what EA funders want and they are internally mission-aligned we should be fine. I don’t know enough about Anthropic for example but they just raised $580M mainly from EAs while not intending to make a profit. This could be a good signal to more organisations out there trying to set up a model where they are interesting to EA funders.
In the end, it comes down to the research and decision-making of the grantmaker. GiveWell has a process where they evaluate charities based on effectiveness. In the longtermism and meta space, we often don’t have such evidence, so we may sometimes rely more on the value alignment of people. Ideally, we would want to reduce this dependence and see more ways to independently evaluate grants regardless of the people getting them.
I was going to write an elaborate rebuttal of the parent comment.
In that rebuttal, I was going to say there’s a striking lack of confidence. The concerns seem like a pretty broad argument against building any business or non-profit organization with a virtuous culture. There are many counterexamples against this argument, and most have the additional burden of balancing that growth while tackling existential issues like funding.
It’s also curious that corruption and unwieldy growth have to set in exactly now, versus, say, with the $8B in 2019.
> I don’t know enough about Anthropic for example but they just raised $580M mainly from EAs while not intending to make a profit. This could be a good signal to more organisations out there trying to set up a model where they are interesting to EA funders.
Now I sort of see how, combined with several other factors, maintaining culture and dealing with adverse selection (“lemons”) might be an issue.
> there will be someone in the world whose full-time job and top-priority it is to figure out how to write a proposal, or give you a pitch at a party, or write a blogpost, or strike up a conversation, that will cause you to give them money, or power, or status
IMO, a reasonable analogy here is to the relationship between startups and VCs.
What do VCs do to weed out the lemons here? Market forces help in the long run (which we won’t have to the same degree) but surely they must be able to do this to some degree initially.
If you want to get a lot of money for your project, EA grants are not the way to do it. Because of the strong philosophical principles of the EA community, we are more skeptical and rigorous than just about any funding source out there. Granted, I don’t actually know much about the nonprofit grant space as a whole: if it comes to the point that EA grants are basically the only game in town for nonprofit funding, then maybe it could become an issue. But if that becomes the case I think we are in a very good position and I believe we could come up with some solutions.
Almost all nonprofit grants require everyone to take very low salaries. There are very few well-paying nonprofit projects. My guess is that EA is the most widely-known community that might pay high salaries for relatively illegible nonprofit projects (and maybe the only widely-known funder/community that pays high salaries for nonprofit projects in general).
I think I’m less worried about the risk of increased deception.
> you won’t have the time for long 10-hour conversations when you hang out in the evening.
The analogy breaks down somewhat because the number of 10-hour conversations also scales with the size of the movement, right? And I think it’s relatively discernible whether somebody actually cares about doing good when you talk to them a lot. I don’t think you need to be a particularly senior EA to notice altruistic and impact-driven intentions.
> we could actually bring any measurable fraction of humanity’s ingenuity and energy to bear on preventing humanity’s extinction and steering us towards a flourishing future, and most of those people of course will be more motivated by their own self-interest than their altruistic motivation.
Additionally, I’m less worried because I think most people actually do care about doing good and doing things efficiently. EA will still select for people who are less motivated to work in industry, where I expect wages to still be higher for somebody capable enough to scheme up a great grant proposal.
Very good point on culture. Culture eats strategy for breakfast as they say. EA is definitely strategy heavy and I think your comment brings up a very important issue to investigate.
> For many months, they will sit down many days a week and ask themselves the question “how can I write this grant proposal in a way that person X will approve of” or “how can I impress these people at organization Y so that I can get a job there?”
I would flip this and say, it’s inevitable that this will happen, so what do we do about it? There are areas we can learn from:
Academia, as you mention—what do we want to avoid here? Which bits actually work well?
Organisations that have grown very rapidly and/or grown in a way that changes their nature. On a for-profit basis: Facebook, as a cautionary tale of what happens when personal control and association isn’t matched with institutional development? On a not-for-profit basis: I work for Greenpeace and we’re certainly very different from what we were decades ago, with a mix of ‘true believers’ and people who are semi-aligned, generally in more support roles. Some would say we’ve sold out, and indeed some people have abandoned us for other groups that are more similar to our early days, but we certainly have a lot more political influence than we did when we were primarily a direct action / protest group.
Corruption studies at a national level. What can we learn from the institutions of very-low-corruption countries, e.g. in Scandinavia, that we might adapt?
I think the question is predictivity. How can you run the most predictive systems possible for selecting good grants/employing suitable people?
I guess over time, networks will be worse predictors and the average trustworthiness of applicants will fall slightly, to which we should respond accordingly.
Though I guess we have to acknowledge that some grants will be misspent, and that the optimal number of bad grants may not be 0.
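As a small illustration of why the optimal number of bad grants is probably not zero, here is a toy simulation. The base rate of good applicants, the noise in the vetting signal, and the payoff numbers are all assumptions made up for the example, not claims about actual EA grantmaking.

```python
# Toy simulation: a stricter funding bar screens out bad grants,
# but also screens out good ones. All numbers are made-up assumptions.
import random

random.seed(0)

def average_value(threshold: float, n: int = 100_000) -> float:
    """Average value per application when funding everyone whose noisy
    vetting signal exceeds `threshold`. Good grantees pay off +3, bad ones -1."""
    total = 0.0
    for _ in range(n):
        is_good = random.random() < 0.7                            # assumed base rate of good applicants
        signal = (1.0 if is_good else 0.0) + random.gauss(0, 0.6)  # noisy vetting signal
        if signal > threshold:
            total += 3.0 if is_good else -1.0
    return total / n

if __name__ == "__main__":
    for t in (0.0, 0.5, 1.0, 1.5):
        print(f"bar at {t:.1f}: avg value per application ≈ {average_value(t):.2f}")
```

Under these made-up numbers, the bar strict enough to let through essentially no bad grants also forgoes most of the value from good ones; the best-performing threshold funds some duds.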
Definitely agree that networks will become worse predictors and that, ultimately, grants, job offers, etc. will become more impersonal. This isn’t entirely a bad thing. For example, personal and network-oriented approaches have significant issues around inclusivity that well-designed systems can avoid, especially if the original network is pretty concentrated and similar (see: the pic in the original post...)
As this happens, people who have been in EA for a while may feel that, over time, the average person in the movement feels less and less similar to them. This is a good thing!… if it’s recognised and well-managed, and people are willing to make the cognitive effort to make it work.