Epistemic health is a community issue

Preamble

This is an extract from a post called “Doing EA Better”, which argued that EA’s new-found power and influence obligate us to solve our movement’s significant problems with respect to epistemics, rigour, expertise, governance, and power.

We are splitting DEAB up into a sequence to facilitate object-level discussion.

Each post will include the relevant parts of the list of suggested reforms. There isn’t a perfect correspondence between the subheadings of the post and the reforms list, so not all reforms listed will be 100% relevant to the section in question.

Finally, we have tried (imperfectly) to be reasonably precise in our wording, and we ask that before criticising an argument of ours, commenters ensure that it is an argument that we are in fact making.

Main

Summary: The Collective Intelligence literature suggests epistemic communities should be diverse, egalitarian, and open to a wide variety of information sources. EA, in contrast, is relatively homogeneous, hierarchical, and insular. This puts EA at serious risk of epistemic blind-spots.

EA highly values epistemics and has a stated ambition of predicting existential risk scenarios. We have a reputation for assuming that we are the “smartest people in the room”.

Yet, we appear to have been blindsided by the FTX crash. As Tyler Cowen puts it:

Hardly anyone associated with Future Fund saw the existential risk to… Future Fund, even though they were as close to it as one could possibly be.

I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant. And, it turns out, many of the real sources of existential risk boil down to hubris and human frailty and imperfections (the humanities remain underrated). When it comes to existential risk, I generally prefer to invest in talent and good institutions, rather than trying to fine-tune predictions about existential risk itself.

If EA is going to do some lesson-taking, I would not want this point to be neglected.

So, what’s the problem?

EA’s focus on epistemics is almost exclusively directed towards individualistic issues like minimising the impact of cognitive biases and cultivating a Scout Mindset. The movement strongly emphasises intelligence, both in general and especially that of particular “thought-leaders”. The implicit assumption seems to be that an epistemically healthy community is created by acquiring maximally rational, intelligent, and knowledgeable individuals, with social considerations relegated to second place. Unfortunately, the science does not bear this out. The quality of an epistemic community does not boil down to the de-biasing and training of individuals;[3] more important factors appear to be the community’s composition, its socio-economic structure, and its cultural norms.[4]

The field of Collective Intelligence provides guidance on the traits to nurture if one wishes to build a collectively intelligent community. For example:

  • Diversity

    • Along a wide variety of dimensions, from cultural background to disciplinary/​professional training to cognition style to age

  • Egalitarianism

    • People must feel able to speak up (and must be listened to if they do)

    • Dominance dynamics amplify biases and steer groups into suboptimal path dependencies

    • Leadership is typically best employed on a rotating basis for discussion-facilitation purposes rather than top-down decision-making

    • Avoid appeals and deference to community authority

  • Openness to a wide variety of sources of information

  • Generally high levels of social/​emotional intelligence

    • This is often more important than individuals’ skill levels at the task in question

However, the social epistemics of EA leave much to be desired. As we will elaborate on below, EA:

  • Is mostly composed of people with very similar demographic, cultural, and educational backgrounds

  • Places too much trust in (powerful) leadership figures

  • Is remarkably intellectually insular

  • Confuses value-alignment and seniority with expertise

  • Is vulnerable to motivated reasoning

  • Is susceptible to conflicts of interest

  • Has powerful structural barriers to raising important categories of critique

  • Is susceptible to groupthink

Decision-making structures and intellectual norms within EA must therefore be improved upon.[5]

Suggested reforms

Below is a preliminary, non-exhaustive list of relevant suggestions for structural and cultural reform that we think may be good ideas and should certainly be discussed further.

It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!

In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.

Italics indicate reforms strongly inspired by or outright stolen from Zoe Cremer’s list of structural reform ideas. Some are edited or merely related to her ideas; they should not be taken to represent Zoe’s views.

Asterisks (*) indicate that we are less sure about a suggestion, but sure enough that we think it is worth considering seriously, e.g. through deliberation or research. Otherwise, we have been developing or advocating for most of these reforms for a long time and have a reasonable degree of confidence that they should be implemented in some form or another.

Timelines are suggested to ensure that reforms can become concrete. If stated, they are rough estimates, and if there are structural barriers to a particular reform being implemented within the timespan we suggest, let us know!

Categorisations are somewhat arbitrary; we just needed to break up the text for ease of reading.

Critique

General

  • EAs must be more willing to make deep critiques, both in private and in public

    • You are not alone, you are not crazy!

    • There is a much greater diversity of opinion in this community than you might think

    • Don’t assume that the people in charge must be smarter than you, and that you must be missing something if you disagree – even most of them don’t think that!

  • EA must be open to deep critiques as well as shallow critiques

    • We must temper our knee-jerk reactions against deep critiques, and be curious about our emotional reactions to arguments – “Why does this person disagree with me? Why am I so instinctively dismissive about what they have to say?”

    • We must be willing to accept the possibility that “big” things may need to be fixed and that some of our closely-held beliefs are misguided

    • Our willingness to consider a critique should be orthogonal to the seniority of its authors or of its subject(s)

    • When we reject critiques, we should present our reasons for doing so

  • EAs should read more deep critiques of EA, especially external ones

    • For instance, this blog and this forthcoming book

  • EA should cut down its overall level of tone/​language policing

    • Norms should still be strongly in favour of civility and good-faith discourse, but anger or frustration cannot be grounds for dismissal, and deep critique must not be misinterpreted as aggression or “signalling”

    • Civility must not be confused with EA ingroup signalling

    • Norms must be enforced consistently, applying to senior EAs just as much as newcomers

  • EAs should make a conscious effort to avoid (subconsciously/​inadvertently) using rhetoric about how “EA loves criticism” as a shield against criticism

    • Red-teaming contests, for instance, are very valuable, but we should avoid using them to claim that “something is being done” about criticism and thus we have nothing to worry about

    • “If we are so open to critique, shouldn’t we be open to this one?”

    • EAs should avoid delaying reforms by professing to take critiques very seriously without actually acting on them

  • EAs should state their reasons when dismissing critiques, and should be willing to call out other EAs if they use the rhetoric of rigour and even-handedness without its content

  • EAs, especially those in community-building roles, should send credible/​costly signals that EAs can make or agree with deep critiques without being excluded from or disadvantaged within the community

  • EAs should be cautious of knee-jerk dismissals of attempts to challenge concentrations of power, and seriously engage with critiques of capitalist modernity

  • EAs, especially prominent EAs, should be willing to cooperate with people writing critiques of their ideas and participate in adversarial collaborations

  • EA institutions and community groups should run discussion groups and/​or event programmes on how to do EA better

Epistemics

General

  • EA should study social epistemics and collective intelligence more, and epistemic efforts should focus on creating good community epistemics rather than merely good individual epistemics

    • As a preliminary programme, we should explore how to increase EA’s overall levels of diversity, egalitarianism, and openness

  • EAs should practise epistemic modesty

    • We should read much more, and more widely, including authors who have no association with (or even open opposition to) the EA community

    • We should avoid assuming that EA/​Rationalist ways of thinking are the only or best ways

    • We should actively seek out not only critiques of EA, but critiques of and alternatives to the underlying premises/​assumptions/​characteristics of EA (high modernism, elite philanthropy, quasi-positivism, etc.)

    • We should stop assuming that we are smarter than everybody else

  • EAs should consciously separate:

    • An individual’s suitability for a particular project, job, or role

    • Their expertise and skill in the relevant area(s)

    • The degree to which they are perceived to be “highly intelligent”

    • Their perceived level of value-alignment with EA orthodoxy

    • Their seniority within the EA community

    • Their personal wealth and/​or power

  • EAs should make a point of engaging with and listening to EAs from underrepresented disciplines and backgrounds, as well as those with heterodox/​“heretical” views

  • When EA organisations commission research on a given question, they should publicly pre-register their responses to a range of possible conclusions

Diversity

  • EA institutions should select for diversity

    • With respect to:

      • Hiring (especially grantmakers and other positions of power)

      • Funding sources and recipients

      • Community outreach/​recruitment

    • Along lines of:

      • Academic discipline

      • Educational & professional background

      • Personal background (class, race, nationality, gender, etc.)

      • Philosophical and political beliefs

    • Naturally, this should not be unlimited – some degree of mutual similarity of beliefs is needed for people to work together – but we do not appear to be in any immediate danger of becoming too diverse

  • Previous EA involvement should not be a necessary condition to apply for specific roles, and job postings should not assume that all applicants will identify with the label “EA”

  • EA institutions should hire more people who have had little to no involvement with the EA community, provided that they care about doing the most good

  • People with heterodox/​“heretical” views should be actively selected for when hiring to ensure that teams include people able to play “devil’s advocate” authentically, reducing the need to rely on highly orthodox people accurately steel-manning alternative points of view

  • Community-building efforts should be broadened, e.g. involving a wider range of universities, and group funding should be less contingent on the perceived prestige of the university in question and more focused on the quality of the proposal being made

  • EA institutions and community-builders should promote diversity and inclusion more, including funding projects targeted at traditionally underrepresented groups

  • A greater range of people should be invited to EA events and retreats, rather than limiting e.g. key networking events to similar groups of people each time

  • There should be a survey on cognitive/​intellectual diversity within EA

  • EAs should not make EA the centre of their lives, and should actively build social networks and career capital outside of EA

Openness

  • Most challenges, competitions, and calls for contributions (e.g. cause area exploration prizes) should be posted where people not directly involved within EA are likely to see them (e.g. Facebook groups of people interested in charities, academic mailing lists, etc.)

  • Speaker invitations for EA events should be broadened away from (high-ranking) EA insiders and towards, for instance:

    • Subject-matter experts from outside EA

    • Researchers, practitioners, and stakeholders from outside of our elite communities

      • For instance, we need a far greater input from people from Indigenous communities and the Global South

  • External speakers/​academics who disagree with EA should be invited to give keynotes and talks, and to participate in debates with prominent EAs

  • EAs should make a conscious effort to seek out and listen to the views of non-EA thinkers

    • Not just to respond!

  • EAs should remember that EA covers one very small part of the huge body of human knowledge, and that the vast majority of interesting and useful insights about the world have come, and will continue to come, from outside of EA

Expertise & Rigour

Rigour

  • Work should be judged on its quality, rather than the perceived intelligence, seniority or value-alignment of its author

    • EAs should avoid assuming that research by EAs will be better than research by non-EAs by default

Reading

  • Insofar as a “canon” is created, it should be of the best-quality works on a given topic, not the best works by (orthodox) EAs about (orthodox) EA approaches to the topic

    • Reading lists, fellowship curricula, and bibliographies should be radically diversified

    • We should search everywhere for pertinent content, not just the EA Forum, LessWrong, and the websites of EA orgs

    • We should not be afraid of consulting outside experts, both to improve content/​framing and to discover blind-spots

  • EAs should see fellowships as educational activities first and foremost, not just recruitment tools

  • EAs should continue creating original fellowship ideas for university groups

  • EAs should be more willing to read books and academic papers

Good Science

  • EAs should be curious about why communities with decades of experience studying problems similar to the ones we study do things the ways that they do

Experts & Expertise

  • EAs should deliberately broaden their social/​professional circles to include external domain-experts with differing views

Funding & Employment

Grantmaking

  • Grantmakers should be radically diversified to incorporate EAs with a much wider variety of views, including those with heterodox/​“heretical” views

Governance & Hierarchy

Leadership

  • EAs should avoid hero-worshipping prominent EAs, and be willing to call it out among our peers

    • We should be able to openly critique senior members of the community, and avoid knee-jerk defence/​deference when they are criticised

  • EA leaders should take active steps to minimise the degree of hero-worship they might face

    • For instance, when EA books or sections of books are co-written by several authors, co-authors should be given appropriate attribution

  • EAs should deliberately platform less well-known EAs in media work

  • EAs should assume that power corrupts, and EAs in positions of power should take active steps to:

    • Distribute and constrain their own power as a costly signal of commitment to EA ideas rather than their position

    • Minimise the corrupting influence of the power they retain and send significant costly signals to this effect

  • Fireside chats with leaders at EAG events should be replaced with:

    • Panels/​discussions/​double-cruxing discussions involving a mix of:

      • Prominent EAs

      • Representatives of different EA organisations

      • Less well-known EAs

      • External domain-experts

    • Discussions between leaders and unknown EAs

Decentralisation

  • EA institutions should see EA ideas as things to be co-created with the membership and the wider world, rather than transmitted and controlled from the top down

Contact Us

If you have any questions or suggestions about this article, EA, or anything else, feel free to email us at concernedEAs@proton.me