Doing lots of good vs getting really rich
Here in the EA community, we’re trying to do lots of good. Recently I’ve been thinking about the similarities and differences between a community focused on doing lots of good and a community focused on getting really rich.
I think this is interesting for a few reasons:
I found it clarifying to articulate the main differences between how we should behave and how the wealth-seeking community should behave.
I think that EAs make mistakes that you can notice by thinking about how the wealth-seeking community would behave, and then thinking about whether there’s a good reason for us behaving differently.
——
Here are some things that I think the wealth-seeking community would do.
There are some types of people who should try to get rich by following some obvious career path that’s a good fit for them. For example, if you’re a not-particularly-entrepreneurial person who won math competitions in high school, it seems pretty plausible that you should work as a quant trader. If you think you’d succeed at being a really high-powered lawyer, maybe you should do that.
But a lot of people should probably try to become entrepreneurs. In college, they should start small businesses, develop appropriate skills (eg building web apps), start trying to make various plans about how they might develop some expertise that they could turn into a startup, and otherwise practice skills that would help them with this. These people should be thinking about what risks to take, what jobs to maybe take to develop skills that they’ll need later, and so on.
I often think about EA careers somewhat similarly:
Some people are natural good fits for particular cookie-cutter roles that give them an opportunity to have a lot of impact. For example, if you are an excellent programmer and ML researcher, I (and many other people) would love to hire you to work on applied alignment research; basically all you have to do to get these roles is to obtain those skills and then apply for a job.
But for most people, the way they will have impact is much more bespoke and relies much more on them trying to be strategic and spot good opportunities to do good things that other people wouldn’t have otherwise done.
I feel like many EAs don’t take this distinction as seriously as they should. I fear that EAs see that there exist roles of the first type—you basically just have to learn some stuff, show up, and do what you’re told, and you have a bunch of impact—and then they don’t realize that the strategy they should be following is going to involve being much more strategic and making many more hard decisions about what risks to take. Like, I want to say something like “Imagine you suddenly decided that your goal was to make ten million dollars in the next ten years. You’d be like, damn, that seems hard, I’m going to have to do something really smart in order to do that, I’d better start scheming. I want you to have more of that attitude to EA.”
Important differences:
Members of the EA community are much more aligned with each other than wealth-seeking people are. (Maybe we’re supposed to be imagining a community of people who wanted to maximize the total wealth of the community for some reason.)
Opportunities for high impact are biased to be earlier in your career than opportunities for high income. (For example, running great student groups at top universities is pretty high up there in impact-per-year according to me; there isn’t really a similarly good moneymaking opportunity for which students are unusually well suited.)
The space of opportunities to do very large amounts of good seems much narrower than the space of opportunities to make money. So you end up with EAs wanting to work with each other much more than the wealth-maximizing people want to work with each other.
It seems harder to make lots of money in a weird, bespoke, non-entrepreneurial role than it is to have lots of impact. There are many EAs who have particular roles which are great fits for them and which allow them to produce a whole bunch of value. I know of relatively fewer cases where someone gets a job which seems weirdly tailored to them and is really highly paid.
I think this is mostly because my sense is that in the for-profit world, it’s hard to get people to be productive in weird jobs, and you’re mostly only able to hire people for roles that everyone involved understands very well already. And so even if someone would be able to produce a huge amount of value in some particular role, it’s hard for them to get paid commensurately, because the employer will be skeptical that they’ll actually produce all that value, and other potential employers will also be skeptical and so won’t bid their price up.
Thanks, this is an interesting analogy.
If too few EAs go into more bespoke roles, then one reason could be risk-aversion. Rightly or wrongly, they may view those paths as more insecure and risky (for them personally; though I expect personal and altruistic risk correlate to a fair degree). If so, then one possibility is that EA funders and institutions/orgs should try to make them less risky or otherwise more appealing (there may already be some such projects).
In recent years, EA has put less emphasis on self-sacrifice, arguing that we can’t expect people to live on very little. There may be a parallel to risk—that we can’t expect people to take on more risk than they’re comfortable with, but instead must make the risky paths more appealing.
I like this chain of reasoning. I’m trying to think of concrete examples, and it seems a bit hard to come up with clear ones, but I think this might just be a function of the bespoke-ness.
I’m commenting on a few Shortforms I think should be top-level posts so that more people see them, they can be tagged, etc. This is one of the clearest cases I’ve seen; I think the comparison is really interesting, and a lot of people who are promising EA candidates will have “become really rich” as a viable option, such that they’d benefit especially from thinking about this comparison themselves.
Anyway, would you consider making this a top-level post? I don’t think the text would need to be edited at all — it could be as-is, plus a link to the Shortform comments.
Something I imagined while reading this was being part of a strangely massive (~1000 person) extended family whose goal was to increase the net wealth of the family. I think it would be natural to join one of the family businesses, it would be natural to make your own startup, and also it would be somewhat natural to provide services for the family that aren’t directly about making the money yourself. Helping make connections, find housing, etc.
Reminds me of The House of Saud (although I’m not saying they have this goal, or any shared goal):
“The family in total is estimated to comprise some 15,000 members; however, the majority of power, influence and wealth is possessed by a group of about 2,000 of them. Some estimates of the royal family’s wealth measure their net worth at $1.4 trillion”
https://en.wikipedia.org/wiki/House_of_Saud
Thanks for writing this up. At the risk of asking an obvious question, I’m interested in why you think entrepreneurship is valuable in EA.
One explanation for why entrepreneurship has high financial returns is information asymmetry/adverse selection: it’s hard to tell if someone is a good CEO apart from “does their business do well”, so they are forced to have their compensation tied closely to business outcomes (instead of something like “does their manager think they are doing a good job”), which have high variance; as a result of this variance and people being risk-averse, expected returns need to be high in order to compensate these entrepreneurs.
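To make the risk-aversion step concrete, here is a minimal sketch in Python. The numbers and the log-utility assumption are made up for illustration (nothing here is from the comment above); the point is just that a high-variance payoff needs a much higher expected value than a safe salary before a risk-averse person would take it.

```python
import math

# Minimal illustration (assumed numbers, log utility): a risk-averse person
# values a high-variance payoff at well below its expected value, so the
# expected returns to entrepreneurship must be high to compensate.

BASE_WEALTH = 50_000   # assumed existing wealth/savings (made-up figure)
SALARY = 100_000       # the safe option: a fixed salary (made-up figure)

def certainty_equivalent(outcomes):
    """Sure payment with the same expected log utility as the risky outcomes.
    `outcomes` is a list of (probability, payoff) pairs."""
    expected_utility = sum(p * math.log(BASE_WEALTH + x) for p, x in outcomes)
    return math.exp(expected_utility) - BASE_WEALTH

# Stylized startup: 90% chance of roughly nothing, 10% chance of a $2M exit.
venture = [(0.90, 0), (0.10, 2_000_000)]
expected_value = sum(p * x for p, x in venture)

print(f"venture expected value:       ${expected_value:,.0f}")                 # $200,000
print(f"venture certainty equivalent: ${certainty_equivalent(venture):,.0f}")  # ~$22,000
print(f"salary (the safe option):     ${SALARY:,.0f}")
```

With these made-up numbers the venture’s expected value is twice the salary, yet its certainty equivalent is under a quarter of the salary; that gap is the risk premium the explanation above says entrepreneurs need to be paid.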
It’s not obvious to me that this information asymmetry exists in EA. E.g. I expect “Buck thinks X is a good group leader” correlates better with “X is a good group leader” than “Buck thinks X will be a successful startup” correlates with “X is a successful startup”.
It seems like there might be a “market failure” in EA where people can reasonably be known to be doing good work, but are not compensated appropriately for their work, unless they do some weird bespoke thing.
You seem to be wise and thoughtful, but I don’t understand the premise of this question or this belief:

One explanation for why entrepreneurship has high financial returns is information asymmetry/adverse selection: it’s hard to tell if someone is a good CEO apart from “does their business do well”, so they are forced to have their compensation tied closely to business outcomes (instead of something like “does their manager think they are doing a good job”), which have high variance; as a result of this variance and people being risk-averse, expected returns need to be high in order to compensate these entrepreneurs.

It’s not obvious to me that this information asymmetry exists in EA. E.g. I expect “Buck thinks X is a good group leader” correlates better with “X is a good group leader” than “Buck thinks X will be a successful startup” correlates with “X is a successful startup”.
But the reasoning [that existing orgs are often poor at rewarding/supporting/fostering new (extraordinary) leadership] seems to apply:
For example, GiveWell was a scrappy, somewhat polemical startup, and the work done there ultimately succeeded and created Open Phil and, to a large degree, the present EA movement.

I don’t think any of this would have happened if Holden Karnofsky and Elie Hassenfeld had had to, say, go into Charity Navigator (or a dozen other low-wattage meta-charities that we will never hear of) and try to turn it around from the inside. While I’m being somewhat vague here, my models of orgs and the information I have from EA orgs do not suggest that they are any better at this (for mostly benign, natural reasons, e.g. “focus”).
It seems that the main value of entrepreneurship is the creation of new orgs to have impact, both from the founder and from the many other staff/participants in the org.
Typically (and maybe ideally) new orgs are in wholly new territory (underserved cause areas, untried interventions) and inherently there are fewer people who can evaluate them.
It seems like there might be a “market failure” in EA where people can reasonably be known to be doing good work, but are not compensated appropriately for their work, unless they do some weird bespoke thing.

The now-canonized posts (Really Hard, and Denise Melchin’s account of her experiences) suggest that exactly this has happened, extensively even. I think it is very likely that both of these people are not just useful, but are or could be highly impactful in EA, and do not “deserve” the experiences they described.
[I think the main counterpoint would be that only the top X% of people are eligible for EA work, or something like that, and that X is quite small. I’m willing to try to understand this idea, but it doesn’t seem plausible/acceptable to me. Note that currently there is a concerted effort to foster/sweep in very high-potential longtermists and high-potential EAs at early career stages, which seems invaluable and correct. In this effort, my guess is that the accompanying focus on very high-quality candidates comes from experiences with the “production function” of work in AI/longtermism. However, I think this focus does not apply in the same way to other cause areas.]
Again, as mentioned at the top, I feel like I’ve missed the point and I’m just beating a caricature of what you said.
Thanks! “EA organizations are bad” is a reasonable answer.
(In contrast, “for-profit organizations are bad” doesn’t seem like a reasonable answer for why for-profit entrepreneurship exists, as adverse selection isn’t something better organizations can reasonably get around. It seems important to distinguish these, because it tells us how much effort EA organizations should put into supporting entrepreneur-type positions.)
Maybe there’s some lesson to be learned. And I do think that EAs should often aspire to be more entrepreneurial.
But maybe the main lesson is for the people trying to get really rich, not the other way round. I imagine both communities have their biases. I imagine that lots of people try entrepreneurial schemes for similar reasons to why lots of people buy lottery tickets. And I’d guess that this often has to do with scope neglect, excessive self-confidence / sense of exceptionalism, and/or desperation.