Organizations become dysfunctional when employees have reasons to act in ways that are incompatible with the organization’s goals (see the principal-agent problem). Aligning employees’ incentives with those goals is nontrivial. You can see this play out in the business world:
One reason Silicon Valley wins is that its companies offer meaningful equity to their employees. Peter Thiel recommends against compensating startup CEOs with lots of cash or relying on consultants.
Y Combinator companies are most interested in hiring programmers motivated by building a great product. If you’re building an app for skiers, it may be wise to pass on the programming genius in favor of a solid developer who’s passionate about skiing.
For a non-Silicon Valley example, Koch Industries has grown 2000x since 1967. Rather than using breakthrough technology, connections, or cheap capital, Charles Koch credits his success to the culture he built based on “crazy ideas drawing from my interest in the philosophy of science, the scientific method, and my studies of how people can best live and work together.” On the topic of hiring, he says: “most [companies] hire first on talent and then they hope that the values are compatible with their culture… We hire first on values.”
The principal-agent problem is especially relevant in large organizations. An employee’s incentives are set up by their boss, so each additional layer of hierarchy is an opportunity for incentive problems to compound. (Real-life example from my last job: boss says I need to work on project x, even though we both know it’s not very important for the company, because finishing it makes our department look good.) This is the stuff out of which Dilbert comics are made.
But incentives are still super important in small organizations. You’d think that a company’s CEO, of all people, would be immune, but Peter Thiel observed that high CEO salaries predict startup failure because they make the company’s culture not “equity-focused”.
There are also benefits associated with having a team that’s unified culturally.
Using non-EA employees seems fine when incentives are easy to set up and the work is not core to the organization’s mission, e.g. ditch-digging type work.
I agree with this. Organisational culture matters a lot. I would suggest that a good strategy is hiring a mix: enough people motivated by the right thing to set and maintain the culture, so that those new to the culture will, for the most part, adopt it (provided the culture is based on sound principles, as EA is). This provides the benefits of (a) the flexibility to hire people with specialist skills from outside the current EA community, (b) encouraging the development of more EAs, and (c) outside perspectives that can, where appropriate, be used to improve the implementation of EA principles (or the refinement of the principles themselves).
(Note that in many of these cases, the new people you’re most likely to be looking at are not “people who are opposed to EA” but rather “people who haven’t previously encountered, thought deeply about, or had much exposure to the best thinking and arguments around EA”.)
(This is obviously relevant to generic culture-building, not necessarily EA-specific)
Excellent point; that’s an important reason to hire value-aligned people that I hadn’t really considered. I expect it wouldn’t matter much in some cases; for example, my understanding is that most GiveWell employees wouldn’t be doing anything particularly altruistic if they worked elsewhere, and GiveWell doesn’t seem to have substantial principal-agent problems. But I would expect you’d want to hire value-aligned employees in most cases.
Edit: Alternatively, you can benefit from hiring value-aligned people who probably wouldn’t do something similarly effective otherwise. For example, I’d expect that effective animal organizations hire some people who care about animals but otherwise would have worked at a shelter or something similarly small-scale.
“GiveWell doesn’t seem to have substantial principal-agent problems”
Grantmaking seems to me like an area where it’s especially important to hire value-aligned people: handing out large amounts of money creates conflict-of-interest opportunities galore.
It’s also hard to observe how good a job a GiveWell analyst is doing. It seems easy for poorly aligned analysts to do suboptimal work (mainly through subtle omissions) in a way that more motivated people wouldn’t. For example, a non-altruistic employee may choose not to highlight a crucial consideration that renders three months of their work irrelevant.