One possible related framing is “what types of people/thinking styles/archetypes does the EA or AI safety community most need*?” Do we need:
Soldiers?
Rough definition: People who’re capable and willing to “do what needs to be done.” Can be pointed at a well-defined goal and execute well against it.
In this world, most of the problems are well-known. We may not be strong enough to solve the problems, but we know what they are. It just takes grit, hardened determination, and occasionally local knowledge to solve them well.
What I like about this definition: Emphasizes grit. Emphasizes that often it just requires people willing to do the important grunt work, and to sacrifice prestige games to do the most important thing (control-F for “enormous amounts of shit”).
Potential failure modes: Intellectual stagnation. “Soldier mindset” in the bad sense of the term.
Philosophers?
Rough definition: Thinkers willing to play with and toy around with lots of possibilities. Often bad at keeping their “eye on the ball,” and perhaps at general judgement, but good at spontaneous and creative intellectual jumps.
In this world, we may have dire problems, but the most important ones are poorly understood. We have a lot of uncertainty over what the problems even are, never mind how to solve them.
What I like about this definition: Emphasizes the uncertain nature of the problems, and the need for creativity in thought to fix the issues.
Potential failure modes: Focusing on interesting problems over important problems. Too much abstraction or “meta.”
Generals?
Rough definition: Thinkers/strategists willing to consider a large range of abstractions in the service of a (probably) just goal.
In this world, there are moral problems, and we have a responsibility to fix them. This can’t entirely be solved on pure grit; it requires careful judgements of risk, morale, logistics, ethics, etc. But there are still clear goals in mind, and keeping your “eye on the ball” is really important to achieving them. There’s also a lot of responsibility (if you fail, your people die. Worse, your side might lose the war and fascists/communists/whatever might take over).
What I like about this definition: Emphasizes a balance between thoughtfulness and determination.
Potential failure modes: Fighting the “wrong” war (most wars are probably bad). Prematurely abdicating responsibility for higher-level questions in favor of what’s needed to win.
Something else?
This is my current guess of what we need. “Generals” is an appealing aesthetic, but I think the problems aren’t well-defined enough, and our understanding of how to approach them too ill-formed, for thinking of ourselves as generals in a moral war to be anything but premature.
In the above archetypes, I feel good about “visceralness” for soldiers and maybe generals, but not for philosophers. I think I feel bad about “contemplating your own death” for all three, but especially for philosophers and generals (a general who obsesses over their own death will probably make more mistakes, because they aren’t trying as hard to win).
Perhaps I’m wrong. Another general-like archetype I’ve considered is “scientists on the Manhattan Project,” and I feel warmer about Manhattan Project scientists having a visceral sense of their own death than I do about generals. Perhaps I’d be interested in reading about actual scientists trying to solve problems that have a high probability of affecting them one day (e.g. aging, cancer, and heart disease researchers). Do they find the thought that their failures may be causally linked to their own deaths motivating, or just depressing?
*as a method to suss out both selection (who should we most try to attract?) and training (which virtues/mindsets is it most important to cultivate?)