When I speak of a strong inoculant, I mean something that is very effective in preventing the harm in question—such as the measles vaccine. Unless there were a measles case at my son’s daycare, or a family member were extremely vulnerable to measles, the protection provided by the strong inoculant is enough that I can carry on with life without thinking about measles.
In contrast, the influenza vaccine is a weak inoculant—I definitely get vaccinated because I’ll get infected and hospitalized less often than I would without it. But I’m not surprised when I get the flu. If I were at great risk of serious complications from the flu, then I’d use vaccination as only one layer of my mitigation strategy (and without placing undue reliance on it). And of course there are strengths in between those two.
I’d call myself moderately cynical. I think history teaches us that the corrupting influence of power is strong and that managing this risk has been a struggle. I don’t think I need to take the position that no strong inoculant exists. It is enough to assert that—based on centuries of human experience across cultures—our starting point should be to treat inoculants as weak until proven otherwise by sufficient experience. And when one of the star pupils goes so badly off the rails, along with several others in his orbit, that adds to the quantum of evidence I think is necessary to overcome the general rule.
I’d add that one of the traditional ways to mitigate this risk is to observe the candidate over a long period of time in conjunction with lesser levels of power. Although it doesn’t always work well in practice, you do get some ability to measure the specific candidate’s susceptibility in lower-stakes situations. It may not be popular to say, but we simply won’t have had the same opportunity to observe people in their 20s and 30s in intermediate-power situations that we often will have had for the 50+ crowd. Certainly people can and do fake being relatively unaffected by money and power for many years, but that is harder to pull off over a long period than over a short one.
If anything can be an inoculant against those temptations, surely a strong adherence to a cause greater than oneself, packaged with lots of warnings against biases and other ways humans can go wrong (as is the common message in EA and rationalist circles), seems like the best hope for it?
Maybe. But on first principles, one might have also thought that belief in an all-powerful, all-knowing deity who will hammer you if you fall out of line would be a fairly strong inoculant. But experience teaches us that this is not so!
Also, if I had to design a practical philosophy that was maximally resistant to corruption, I’d probably ground it in virtue ethics or deontology rather than give so much weight to utilitarian considerations. The risk of the newly-powerful person deceiving themselves may be greater for a utilitarian.
--
As you imply, the follow-up question is where we go from here. I think there are three possible approaches to dealing with a weak or moderate-strength inoculant:
In some cases, a sober understanding of how strong or weak the inoculant is should lead to a decision not to proceed with a project at all.
In other cases, a sober understanding of the inoculant affects how we should weigh further measures to mitigate the risk of corrupting influence against maximizing effectiveness.
For instance, I think you’re onto something with “these people are advantaged at some aspects of ambitious leadership.” If I’m permitted a literary analogy, one could assign more weight to how much a would-be powerholder has the Spirit of Frodo in deciding whom to entrust with great power. Gandalf tells us that Bilbo (and thus Frodo) was meant to have the Ring, and not by its maker. The problem is that Frodo would probably make a lousy CEO in a competitive, fast-moving market, and I’m not sure you can address that without also removing something of what makes him best-suited to bear the Ring.
In still other cases, there isn’t a good alternative and there aren’t viable mitigating factors. But acknowledging the risk that is being taken is still important; it ensures we are accounting for all the risks, reminds us to prepare contingency plans, and so on.
My point is that doing these steps well requires a reasonably accurate view of inoculant strength. And I got the sense that the community is more confident in EA-as-inoculant than the combination of general human experience and the limited available evidence on EA-as-inoculant warrants.