Disclaimer: I have disagreeable tendencies myself; I’m working on it, but I’m biased. I think you’re getting at something useful, even if most people are somewhere in the middle. I think we should care most about the outliers on both sides, because they could be extremely powerful when working together.
I want to add some **speculations** about these roles, framed by the level at which we’re trying to achieve something: individual or collective.
When no single agent can understand reality well enough to be a good principal, it seems most beneficial for the collective to consist of modestly polarized agents (this seems consistent with most of the literature on group decision-making and policy processes, e.g. “Adaptive Rationality, Garbage Cans, and the Policy Process”, Emerald Insight).
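Purely as an illustration, here is a minimal toy sketch of that claim (every detail of it is invented for this comment, not taken from the cited literature): a group of agents shares observations of noisy options, and each agent scores options with its own optimism bias. A homogeneous, perfectly “calibrated” group acts greedily on early noise, while a modestly polarized mix keeps exploring.

```python
import random
import statistics

def run_collective(biases, n_arms=20, horizon=300, seed=0):
    """Toy model: agents share one pool of observations of noisy arms.
    Each agent picks the arm maximizing (shared mean estimate + its own
    bias * remaining uncertainty). Returns the average true value of the
    arms the group actually pulled."""
    rng = random.Random(seed)
    true_values = [rng.gauss(0, 1) for _ in range(n_arms)]
    counts = [1] * n_arms
    # One noisy initial sample per arm, so estimates start out misleading.
    sums = [rng.gauss(true_values[a], 1) for a in range(n_arms)]
    total = 0.0
    for t in range(horizon):
        bias = biases[t % len(biases)]  # agents take turns choosing

        def score(a):
            mean = sums[a] / counts[a]
            uncertainty = counts[a] ** -0.5  # shrinks as an arm gets sampled
            return mean + bias * uncertainty

        arm = max(range(n_arms), key=score)
        reward = rng.gauss(true_values[arm], 1)
        sums[arm] += reward
        counts[arm] += 1
        total += true_values[arm]
    return total / horizon

# Homogeneous calibrated group vs. a modestly polarized mix of the same size.
groups = {
    "calibrated": [0.0] * 6,
    "polarized":  [-1.0, -0.5, 0.0, 0.0, 0.5, 1.0],
}
for name, biases in groups.items():
    scores = [run_collective(biases, seed=s) for s in range(100)]
    print(name, round(statistics.mean(scores), 3))
```

Whether the polarized group actually comes out ahead depends on these made-up parameters; the point is the mechanism. The purely calibrated group tends to lock onto arms that looked good by early luck, while the optimists in the mixed group keep paying exploration costs that the whole group benefits from.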
If that’s right, the EA network should want people who are confident enough in their own world views to explore them properly, who are happy to generate new ideas through epistemic trespassing, and who are willing to explore outside the Overton window. Unless your social environment productively reframes what is currently perceived as “failure”, overconfidence seems basically required to keep going as a disagreeable.
By nature, overconfidence gets punished in communities that value calibration and clear metrics of success. Disagreeables turn poisonous as they feel misunderstood, and good assessors become increasingly conservative. The successful members of each camp then build separate communities in which they hold high status, and the two camps extremize one another.
To succeed together, we need to walk the very fine line between productive epistemic trespassing and conserving what we have.
Disagreeables can quickly lose status with assessors because they seem insufficiently epistemically humble or outright nuts. Making your case against a local consensus costs you points. Not being well calibrated on what reality looks like costs you points.
If we are in a sub-optimal reality, however, effort needs to go into defying the odds and changing reality. To have the chutzpah to change a system, it helps to ignore parts of reality at times. It helps to believe that you can have sufficient power to change it. If you’re convinced enough of those beliefs, they often confer power on you in and of themselves.
Incrementally assessing the baseline and then betting on the most plausible outcomes also deepens the tracks we find ourselves on. It is the safe thing to do, and it stabilizes society. Stability is needed if you want to make sure coordination happens. Thus, assessors rightly gain status for predicting correctly. Yet they also reinforce existing narratives and create consensus about what the future could be like.
Consensus about the median outcome can make it harder to break out of existing dynamics: the barrier to coordinating such a break-out is even higher when everyone knows the expected outcome (e.g. that the odds of major change succeeding are low).
In a world where ground truth doesn’t matter much, the power of disagreeables is to create a mob that isn’t anchored in reality but that achieves the coordination to break out of local realities.
Unfortunately, for those of us whose individual capabilities are insufficient to achieve our aims (changing not just our local social reality but the human condition), creating a cult just isn’t helpful. None of us has sufficient data or compute to do it alone.
To achieve our mission, we will need constant error correction. Moreover, the universe is so large that information won’t always travel fast enough, even if there were a sufficiently fast processor. So we need to compute decentrally and somehow still coordinate.
It seems hard for single brains to be both explorers and stabilizers simultaneously, however. So as a collective, we need to appropriately value both and insure one another. Maybe we can help each other switch roles to make it easier to understand both. Instead of drawing conclusions for action at our individual levels, we need to aggregate our insights and decide on action as a collective.
As of right now, only very high-status or privileged people really say what they think, and most others defer to the authorities to ensure their social survival. At an individual level, that’s the right thing to do. But as a collective, we would all benefit if we enabled more value-aligned people to explore, fail, and yet survive comfortably enough to feed what they learn back into the collective.
This is of course not just a question of norms, but also one of infrastructure and psychology.
Thanks for the comment (this could be its own post). It’s a lot to get through, so I’ll comment on some aspects.
> I have disagreeable tendencies myself; I’m working on it, but I’m biased.
I have some too! There are times when I’m fairly sure my intuitions lean overconfident about a research project (due to selection effects, at least), but it doesn’t seem worth debiasing, because I’m going to keep working on it regardless, and I’m not writing about its prioritization. I don’t feel like a great example of either a disagreeable or an assessor; I can lean one way or the other depending on the situation.
> Instead of drawing conclusions for action at our individual levels, we need to aggregate our insights and decide on action as a collective.
I would definitely advocate for appreciating both disagreeables and assessors. I agree it’s easy for assessors to team up against disagreeables (for example, when a company fills up with MBAs), particularly when the assessors don’t respect them.
Some venture capitalists might be examples of assessors who appreciate disagreeables and have learned to work with them. I’m sure they spend a lot of time thinking, “Person X seems slightly insane, but no one else is crazy enough to make a startup in this space, and the downside for us is limited.”
> As of right now, only very high-status or privileged people really say what they think, and most others defer to the authorities to ensure their social survival.
This clearly seems bad to me. For what it’s worth, I don’t feel like I have to hide much of what I think, though maybe I’m somewhat high status. Sadly, I know that high-status people can sometimes say even less than low-status people, because they have more people paying attention and more to lose. I think we could really use improved epistemic setups somehow.