I think your two comments here are well-argued, internally consistent, and strong. However, I think I disagree with
in the context of EA career choice writ large, which I think may be enough to flip the bottom-line conclusion.
I think the crux for me is this: if the differences in object-level impact across people/projects are large enough, then for anybody whose career or project is not in the small subset of the most impactful careers/projects, their object-level impact will likely be dwarfed by their meta-level impact.
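To spell out the arithmetic behind that crux, here is a toy model with purely made-up numbers (the ratio R and probability p below are illustrative assumptions of mine, not estimates from either of our comments):

\[
\text{total impact} \;\approx\; \underbrace{d}_{\text{object-level}} \;+\; \underbrace{p \cdot R \cdot d}_{\text{meta-level}}
\]

where d is the direct impact of a "non-top" career, R is how many times more impactful the top careers are, and p is the chance that a well-acculturated EA in a non-top role counterfactually moves one person into a top one. With, say, R = 100 and p = 0.05, the meta-level term is 5d against an object-level term of d, which is the sense in which the object-level impact gets dwarfed once R is large enough.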
On the object level, for your examples: I think for “high-impact architecture,” having people with a nontrivial background in architecture is likely useful for building civilizational refuges. More directly, I’ve talked to people who think that having 1-3 EA concierge doctors in the community (who can do things like understand our cultural contexts and weird problems, and prescribe medication in jurisdictions like the US and the UK) can be extremely helpful in increasing the impact of top talent in EA. This is analogous to the impact of, e.g., existing community health or mental health workers in the community.
Potentially relevant subquestions:
To what extent does work in EA require EA alignment and acculturation?
The more you think EA orgs can hire well outside of EA for projects outside of EA's natural core competencies, the more it matters that EAs target a relatively small subset of high-impact careers and skillsets to specialize in.
Conversely, if you think (as I do) that alignment and acculturation are just really important for excelling in EA jobs, it matters more that we have people acquiring a wider range of jobs and skillsets.
Do we live in a “big world” or a “small world” of EA things to do?
If we think there’s a narrow set of the best actions and causes, and a small number of people working in any of them, it matters more that individuals optimize for selecting the best things to do, from a bird's-eye view.
If, conversely, we think the range of really good actions and causes is relatively wide, then it matters more that individuals weigh factors like personal fit heavily.
A potential argument here is that the profile you wrote on doctoring was written back when EA was much smaller. We may expect conditions “on the ground” to have changed a lot, and while “concierge EA doctor” would have been a dumb career to aspire to five years ago, perhaps it is less so now.
(I personally think we likely still live in a relatively small world, which I think undercuts my counterarguments significantly).
Relatedly, how important is exploration vs. exploitation for EA?
How damning is the danger of introducing people with worse epistemics into the EA movement? And is worsening epistemics the most important/salient downside risk?
What are the best ways to prevent the above from happening?
Is it having really good first-order reasoning and arguments?
Is it having really good all-things-considered views that try to track all the important considerations, including rather esoteric ones?
???