I agree that I might want to write a top-level post about this at some point. Here is a super rough version of my current model:
To do things as difficult as what EAs are trying to do, you usually need someone to throw basically everything they have behind it, similar to my model of early-stage startups. At the same time, your success rates won’t be super high, because the problems we are trying to solve are often of massive scale, often lack concrete feedback loops, and don’t have many proven solutions.
And even if you succeed to some degree, it’s unlikely that you will be rewarded with status or resources comparable to what you would get from building a successful startup. My model is that EA org success tends to look weird and doesn’t really translate into wealth or status in the broader world. This puts a large cognitive strain on you, in particular given the tendency toward high scrupulosity in the community, by introducing cognitive dissonance between your personal benefit and your moral ideals.
This is combined with an environment that is starved for management capacity, and so has very little room to give people feedback on their plans and actions.
Overall, I expect a high rate of burnout to be inevitable for quite a while to come, and even in the long run I don’t expect we can do much better than startup founders do, at least for a lot of the people who join early-stage organizations.
Thanks for this.

There’s more to say here, but for now I’ll just note that everything in the model above is compatible with a world where burnout & mental health are very tractable & very leveraged (and also compatible with a world where they aren’t):
“throwing everything you have towards the problem” – nudge work norms, group memes, and group myths toward more longterm thinking (e.g. Gwern’s interest in Long Content and the Long Now)
“massive scale problems” – put more effort towards breaking the problems into easy-to-operationalize chunks
“lack of concrete feedback loops” – build more concrete feedback loops, and/or build work methodologies that don’t rely on concrete feedback loops (e.g. Wiles’ proof of Fermat’s Last Theorem)
“lack of proven solutions” – prove out solutions, and study what has worked for longterm-thinking cultures in the past. (Some longterm-thinking cultures: China, the Catholic Church, most of Mahayana Buddhism, Judaism)
“high-scrupulosity culture” – nudge the culture towards a lower-neuroticism equilibrium
“starved for management capacity” – study what has worked for great managers & great institutions in the past, distill lessons from that, then build a culture that trains up strong managers internally and/or attracts great managers from the broader world
Also, there’s the more general strategy of learning about cultures where burnout isn’t a problem (of which there are many), and figuring out what can be brought from those cultures into EA.