How the Haste Consideration turned out to be wrong.
In The haste consideration, Matt Wage essentially argued that given exponential movement growth, recruiting someone is very important, and that in particular, it’s important to do it sooner rather than later. Nine years later, no one in the EA movement seems to believe it anymore, but it feels useful to recap what I view as the three main reasons why:
1. Exponential-looking movement growth will (almost certainly) level off eventually, once the ideas reach the susceptible population. So earlier outreach really only causes the movement to reach its full size at an earlier point. This has been learned from experience: movement growth was north of 50% per year around 2010, but has since tapered to around 10% as of 2018-2020. I’ve seen similar patterns in the AI safety field. (The toy simulation after this list illustrates the point.)
2. When you recruit someone, they may do what you want initially. But over time, your ideas about how to act may change, and they may not update with you. This has been seen in practice in the EA movement, which was highly intellectual and designed around values rather than particular actions. People were reminded that their role is to help answer a question, not imbibe a fixed ideology. Nonetheless, members’ habits and attitudes crystallised—severely—so that now, when leaders change the message to focus on what they believe to be higher priorities, people complain that it doesn’t represent the views and interests of the movement! The same thinking persists several years later. [Edit: this doesn’t counter the haste consideration per se; it’s just one way that recruitment is less good than one might hope. See AGB’s subthread.]
3. The returns from one person’s movement-building activities will often level off. Basically, it’s a lot easier to recruit your best friends than the rest of your friends, and much easier to recruit your friends of friends than their friends. It’s also harder to recruit once you leave university. I saw this personally: the people who did the most good in the EA movement with me, and/or due to me, were among my best couple of friends from high school, plus some of my best friends from the local LessWrong group. Those recruitment efforts during my university days seem potentially much more impactful than my direct actions. More recent efforts at recruitment and persuasion have also made a difference, but they have been more marginal, and seem less impactful than my own direct work.
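Here is the toy simulation promised above: a minimal sketch of logistic growth toward a fixed susceptible population. The growth rate, seed size, and carrying capacity are made-up numbers for illustration, not estimates of actual EA growth. The point is just that starting outreach earlier shifts the whole curve earlier in time without changing where it saturates.

```python
# Toy logistic growth model for reason (1). All parameters are illustrative.
# dN/dt = r * N * (1 - N / capacity), simulated with one-year Euler steps.

def movement_size(start_year, seed=100, r=0.5, capacity=100_000, end_year=2040):
    """Return {year: members} for a movement seeded in `start_year`."""
    n, sizes = 0.0, {}
    for year in range(2005, end_year + 1):
        if year == start_year:
            n = seed  # outreach begins: the first recruits join
        if n > 0:
            n += r * n * (1 - n / capacity)  # ~50%/yr early, tapering later
        sizes[year] = n
    return sizes

early = movement_size(start_year=2009)
late = movement_size(start_year=2011)

for year in (2015, 2025, 2040):
    print(year, f"early start: {early[year]:>9,.0f}", f"late start: {late[year]:>9,.0f}")
# Early years look exponential and the early start is far ahead at 2015,
# but both runs converge to the same ~100,000 ceiling, two years apart.
```

In other words, under this model haste buys the movement’s mature output a couple of years sooner, which is worth something, but far less than a naive picture of recruits compounding forever would suggest.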
Taking all of this together, I’ve sometimes recommended that university students not spend too much time on recruitment. The advice especially applies to top students, who could become distinguished academics or policymakers later on, as their time may be better spent preparing for that future. My very rough sense is that for some, the optimal amount of time to spend recruiting may be one full-time month; for others, a full-time year. And importantly, our best estimates may change over time!
I have a few thoughts here, but my most important one is that your (2), as phrased, is an argument in favour of outreach, not against it. If you update towards a much better way of doing good, and any significant fraction of the people you ‘recruit’ update with you, you presumably did much more good via recruitment than via direct work.
Put another way, recruitment defers the question of how to do good into the future, and is therefore particularly valuable if we think our ideas are going to change/improve particularly fast. By contrast, recruitment (or deferring to the future in general) is less valuable when you ‘have it all figured out’; you might just want to ‘get on with it’ at that point.
***
It might be easier to see with a worked example:
Let’s say in the year 2015 you are choosing whether to work on cause P, or to recruit for the broader EA movement. Without thinking about the question of shifting cause preferences, you decide to recruit, because you think that one year of recruiting generates (e.g.) two years of counterfactual EA effort at your level of ability.
In the year 2020, looking back on this choice, you observe that you now work on cause Q, which you think is 10x more impactful than cause P. With frustration and disappointment, you also observe that a ‘mere’ 25% of the people you recruited moved with you to cause Q, and so your original estimate of two years shrank to six months (a bit more, in fact, since work on P still counts for something in this example, but ignore that for now).
This looks bad because six months < one year, but if you focus on impact rather than time spent, then you realise that you are comparing one year of work on cause P to six months of work on cause Q. Since cause Q is 10x better, your outreach outperformed direct work on P by 5x, versus the 2x you originally expected.
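To make the arithmetic explicit, here is a minimal sketch; the multiplier, retention rate, and impact ratio are just the hypothetical numbers from the example above, not estimates:

```python
# Recruitment vs direct work, using the hypothetical numbers from the example.
# Value is measured in years of cause-P-equivalent work.

recruit_multiplier = 2.0  # 1 year of recruiting -> 2 years of counterfactual effort
retention = 0.25          # fraction of recruits who move with you to cause Q
impact_ratio = 10.0       # cause Q assumed 10x more impactful than cause P

direct_value = 1.0  # one year of your own work on cause P
recruit_value = recruit_multiplier * retention * impact_ratio  # Q-years in P-units
# (This still ignores the recruits who stayed on P, as the example does.)

print(direct_value, recruit_value)  # 1.0 vs 5.0 -> the 5x in the text

# Break-even: recruitment ties direct work when
# recruit_multiplier * retention * impact_ratio == 1.
breakeven_retention = 1 / (recruit_multiplier * impact_ratio)
print(f"{breakeven_retention:.0%}")  # 5% retention, i.e. 95% attrition
```

On these numbers, recruitment only loses once attrition exceeds 95%, which is why something like the 99%-attrition case mentioned below is what it takes to flip the conclusion.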
***
You can certainly plug in numbers where the above equation will come out the other way—suppose you had 99% attrition—but I guess I think they are pretty implausible? If you still think your (2) holds, I’m curious what (ballpark) numbers you would use.
Good point—this has changed my model of this particular issue a lot (it’s actually not something I’ve spent much time thinking about).
I guess we should (by default) imagine that if you recruit a person at time T, they’ll do an activity that you would have valued, based on your beliefs at time T.
Some of us thought that recruitment was even better than that, in that the recruited people would update their views over time. But in practice, they only update their views a little bit, so the uncertainty-bonus for recruitment is small. In particular, if you recruit people to a movement with messaging about cause A, you should expect relatively few of them to switch to cause B based on their group membership, and there may be a lot of within-movement tension between those who do and don’t.
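One way to make that ‘uncertainty-bonus’ precise (my notation, not anything from the original posts: let $m$ be the counterfactual-effort multiplier from a year of recruiting, $p$ the fraction of recruits who later update with you, and $V_A$, $V_B$ the per-year value of the original cause and the better cause):

$$
m\big[(1-p)\,V_A + p\,V_B\big] \;=\; m\,V_A \;+\; \underbrace{m\,p\,(V_B - V_A)}_{\text{uncertainty bonus}}
$$

When $p$ is small, the bonus term is small. AGB’s example above needed $p = 0.25$ together with $V_B = 10\,V_A$ for recruitment to come out well ahead: the formula gives $6.5\,V_A$ per year of recruiting (or $5\,V_A$ if the recruits who stayed on cause A are ignored), against $1\,V_A$ from a year of direct work at the time.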
There are also uncertainty-penalties for recruitment. While recruiting, you crystallise your own ideas. You give up time that you might’ve used for thinking, and for reducing your uncertainties.
On balance, recruitment now seems like a pretty bad way to deal with uncertainty.