I did a close read of “Epistemic health is a community issue.” The most important point I think you’re underemphasizing is that, according to the source you cite, “The diversity referred to here is diversity in knowledge and cognitive models,” not, as you have written, diversity “across essentially all dimensions.” In other words, for collective intelligence, we need to pick people with diverse knowledge and cognitive models relevant to the task at hand, such as people with relevant but distinct professional backgrounds. For example, if you’re designing a better malaria net, you might want both a materials scientist and an epidemiologist, not two materials scientists.
Age and cultural background might be relevant in some cases, but that really depends on what you’re working on and why these demographic categories seem especially pertinent to the task at hand. If I were designing a nursing home with a team composed of young entrepreneurs, I would want old people either to be on the team or to be consulted routinely as the project evolved, because adding that component of diversity would be relevant to the project. If I were developing a team to deploy bed nets in Africa, I might want to work with people from the specific villages where they will be distributed.
Worryingly, EA institutions seem to select against diversity. Hiring and funding practices often select for highly value-aligned yet inexperienced individuals over outgroup experts, university recruitment drives are deliberately targeted at the Sam Demographic (at least by proxy) and EA organisations are advised to maintain a high level of internal value-alignment to maximise operational efficiency.
As your own source says:
For a group of diverse agents to work together, an important aspect is cognitive alignment, such as commitment to group goals and shared beliefs.
EA institutions are struggling to find people who are value-aligned and who also have relevant, diverse cognitive models and knowledge, both of which are prerequisites for collective intelligence. This is a natural problem for EA institutions to have, so for your critique to bite, you need to explain why EA is making this tradeoff between value alignment and cognitive/knowledge diversity suboptimally.
The 80,000 Hours website seems purpose-written for Sam, and is noticeably uninterested in people with humanities or social sciences backgrounds,[14] or those without university education. Unconscious bias is also likely to play a role here – it does everywhere else.
Where your post and the collective intelligence research you’re basing it on seem to diverge is that you want EA to select for diversity across all dimensions, perhaps for its own sake, while the CI research you cited argues that you need to select for the cognitive models and knowledge relevant to the task at hand. 80,000 Hours might be wrong to ignore humanities and social science backgrounds, or those without a university education, but I think your argument would be much stronger if you articulated something specific about what those disciplines would bring to the table.
Whole disciplines exist that are overwhelmingly not value-aligned with EA. Making an effort to include them seems to me like it would add a skillset of questionable task-relevant value while creating fundamental and destructive internal conflict, because EA does have a specific set of theses on what constitutes “effectiveness” and “altruism.” We can try to define those theses in the abstract, but they can perhaps be better articulated by the concrete types of interventions we tend to support, such as global health and x-risk work. Not everybody supports those kinds of projects, or at least not prioritizing them as highly as we do, or using the kinds of underlying models we use to prioritize them (e.g. the ITN framework), and if those disciplines are that far removed from the mission of our movement, then we should probably not try to include them in it.
If I were designing a nursing home with a team composed of young entrepreneurs, I would want old people either to be on the team or to be consulted routinely as the project evolved, because adding that component of diversity would be relevant to the project. If I were developing a team to deploy bed nets in Africa, I might want to work with people from the specific villages where they will be distributed.
And if you’re trying to run a movement dedicated to improving the entire world? Which is what we are doing?
That is a fair rebuttal. I would come back to the model of a value-aligned group with a specific set of tasks seeking to maximize its effectiveness at achieving its objective. That is the framing underlying the collective intelligence research cited here in support of the recommendations for greater diversity.
If you frame EA as a single group trying to achieve the task of “make the entire world better for all human beings by implementing high-leverage interventions” then it does seem relevant to get input from a diverse cross-section of humanity about what they consider to be their biggest problems and how proposed solutions would play out.
One way to get that feedback is to include a demographically representative sample of humanity in EA directly, as active participants. I have no problem with that outcome. I just think we can 80/20 it by seeking feedback on specific proposals.
I also think that basing our decisions about what to pursue on the personal opinions of a representative sample of humanity will lead us to prioritize the small, selfish issues of a powerful majority over the enormous issues faced by underrepresented minorities, such as animals, the global poor, and the denizens of the far future. I think this because I think that the vast majority of humanity is not value-aligned with the principle of altruistic utility maximization.
For these two main reasons (the ability to seek feedback from relevant demographics when necessary, and the value mismatch between EA and humanity in general), I do not see the case for us being unable to operate effectively given our current demographic makeup. I do think that additional diversity might help. I just think that it is one of a range of interventions, it’s not obvious to me that it’s the most pressing priority, and broadening EA to pursue diversity purely for its own sake risks value misalignment with newcomers. Please interpret this as a moderate stance along the lines of “I invite diversity, I just think it’s not the magic solution to fix all of EA’s problems with effectiveness, and the important thing is ‘who does EA talk to’ more than ‘who calls themselves an EA’.”
Hi AllAmericanBreakfast,
The other points (age, cultural background, etc.) are in the Critchlow book, linked just after the paper you mention.
Where exactly is that link? I looked at the rest of the links in the section and don’t see it.
The word before the Yang & Sandberg link
This is the phrase where you introduce the Yang & Sandberg link:
The field of Collective Intelligence provides guidance on the traits to nurture if one wishes to build a collectively intelligent community. For example:
The word before the link is “community,” which does not contain a link.
“For”
OOOOHHHHHHHH
Yeah, this kind of multiple-links approach doesn’t work well in this forum, since there’s no way to see that the links are separate.
I’d recommend separating links that are in neighbouring words (e.g. see here and here).