An epistemology for effective altruism?

At 80,000 Hours, we want to be transparent about our research process. So, I had a go at listing the key principles that guide our research.

I thought it might be interesting to the forum as a take on an epistemology for effective altruism, i.e. what principles should EAs use to make judgements about which causes to support, which careers to take, which charities to donate to, and so on?

I’m interested to hear your ideas on (i) which principles you disagree with, and (ii) which principles we’ve missed.

See the original page here.


What evidence do we consider?

Use of scientific literature

We place relatively high weight on what the scientific literature says about a question, when applicable. If there is relevant scientific literature, we start our inquiry with a literature search.

Expert common sense

When we first encounter a question, our initial aim is normally to work out: (i) who are the relevant experts? (ii) what would they say about this question? We call what they would say ‘expert common sense’, and we think it often forms a good starting position (more). We try not to deviate from expert common sense unless we have an account of why it’s wrong.

Quantification

Which careers make the most difference can be unintuitive, since it’s difficult to grasp the scale and scope of different problems, which often differ by orders of magnitude. This makes it important to attempt to quantify and model key factors when possible. The process of quantification is also often valuable for learning more about an issue and for making your reasoning transparent to others. However, we recognise that for most questions we care about, quantified models contain huge (often unknown) uncertainties, and therefore should not be followed blindly. We always weigh the results of quantified models against qualitative analysis and common sense, taking into account how robust the models are.
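As a toy illustration (the problem, numbers and distributions below are all hypothetical, not one of our actual models), a simple Monte Carlo estimate makes this kind of uncertainty visible: when each input is uncertain by roughly an order of magnitude, the resulting estimate typically spans several orders of magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical inputs, each uncertain by roughly an order of magnitude,
# modelled as log-normal distributions (illustrative numbers only).
people_affected    = rng.lognormal(mean=np.log(1e6), sigma=1.0, size=N)
fraction_helped    = rng.lognormal(mean=np.log(0.01), sigma=1.0, size=N)
benefit_per_person = rng.lognormal(mean=np.log(10), sigma=1.0, size=N)

impact = people_affected * fraction_helped * benefit_per_person

# The spread between the 5th and 95th percentile covers several orders
# of magnitude, which is why point estimates shouldn't be followed blindly.
for q in (5, 50, 95):
    print(f"{q}th percentile: {np.percentile(impact, q):,.0f}")
```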

The experience of the people we coach

We’ve coached hundreds of people on career decisions and have a wider network of people we gather information from who are aligned with our mission. We place weight on their thoughts about the pros and cons of different areas.

How do we combine evidence?

We strive to be Bayesian

We attempt to explicitly state our prior view on an issue, and then update towards or away from it based on the strength of the evidence for or against. See an example here. This is called ‘Bayesian reasoning’, and, although not always adopted, it seems to be regarded as best practice for decision making under high uncertainty by those who write about good decision-making processes.1
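As a toy illustration of what this means in practice (the numbers are made up), a Bayesian update can be written in odds form: take your prior odds, multiply by how much more likely the evidence is if the claim is true than if it is false, and convert back to a probability.

```python
# A minimal sketch of a Bayesian update (illustrative numbers only).

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability given a prior probability and a likelihood ratio
    P(evidence | claim true) / P(evidence | claim false)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A prior of 30%, plus evidence we'd be 4x more likely to see if the claim
# were true, gives a posterior of roughly 63%.
print(bayes_update(prior=0.3, likelihood_ratio=4.0))
```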

We use ‘cluster thinking’

Rather than relying on one or two strong considerations, we seek to evaluate each question from many angles, weighting each perspective according to its robustness and the importance of the consequences. We think this process provides more robust answers in the context of decision making under high uncertainty than the alternatives (such as building a single quantified model and going with its answer). This style of thinking has been supported by various groups and goes by several names, including ‘cluster thinking’, ‘model combination and adjustment’, ‘many weak arguments’, and ‘fox-style’ thinking.
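As a rough sketch of the underlying idea (the perspectives, estimates and weights below are purely illustrative, not drawn from our research), model combination amounts to weighting each perspective’s answer by how robust we judge it to be, rather than going with any single model’s output.

```python
# A minimal sketch of 'cluster thinking' as weighted model combination
# (perspectives, estimates and weights are purely illustrative).

# Each perspective gives an estimate of the same quantity, plus a weight
# reflecting how robust we judge that perspective to be.
perspectives = {
    "quantified cost-effectiveness model": (100.0, 0.2),
    "expert common sense":                 (30.0, 0.5),
    "track record of similar projects":    (40.0, 0.3),
}

total_weight = sum(w for _, w in perspectives.values())
combined = sum(est * w for est, w in perspectives.values()) / total_weight

print(f"Combined estimate: {combined:.1f}")
# The single quantified model said 100; combining it with other, more
# robust perspectives pulls the answer down to 47.
```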

We seek to make this process transparent by listing the main perspectives we’ve considered on a question. We also make regular use of structured qualitative evaluations, such as our framework.

We seek robustly good paths

Our aim is to make good decisions. Since the future is unpredictable and full of unknown unknowns, and we’re uncertain about many things, we seek actions that will turn out to be good under many future scenarios.

Avoiding bias

We’re very aware of the potential for bias in our work, which often relies on difficult judgement calls, and have surveyed the literature on biases in career decisions. To avoid bias, we aim to make our research highly transparent, so that bias is easier to spot. We also aim to state our initial position, so that readers can see the direction in which we’re most likely to be biased, and write about why we might be wrong.

Seeking feedback

We see all of our work as in progress, and aim to improve it by continually gathering feedback.
We gather feedback through several channels:

  • All research is vetted within the team.

  • For major research, we’ll send it to external researchers and people with experience in the area for comments.

  • We aim to publish all of our substantial research publicly on our blog.

  • Blog posts are rated by a group of external raters.

In the future, we intend to carry out internal and external research evaluations.

We aim to make our substantial pieces of research easy to critique by:

  • Clearly explaining our reasoning and evidence. If you see a claim that isn’t backed up by a link or citation, you can assume there’s no further justification.

  • Flagging judgement calls.

  • Giving an overview of our research process.

  • Stating our key uncertainties.