EA should seek out more criticism of key EA concepts
EA’s efforts to maximise impact use important concepts and thinking tools, including:
Maximisation
Expected value
The scientific method
Cost-efficiency
Hierarchies of evidence
The ITN Framework
Counterfactual reasoning
Use of probabilities derived from belief and expert opinion
Forecasting and modelling
Comparing point estimates with wide confidence intervals
Measuring and quantifying everything, even if there’s very high uncertainty involved
Cost-effectiveness bars
While these tools are used formally in write-ups and cost-effectiveness models, I think EAs also use these tools informally to make a variety of decisions.
As far as I’m aware, the only concept where a weakness or criticism is widely known and widely pointed out is expected value: Pascal’s mugging comes up often.
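To make the weakness concrete, here is a toy sketch of why naive expected value is vulnerable to Pascal’s mugging: a near-certainly-worthless gamble can dominate a sensible bet purely on the strength of an astronomical payoff. All numbers are invented for illustration.

```python
# Illustrative only: toy numbers, not a claim about any real intervention.
def expected_value(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# A mundane bet: 50% chance of gaining 10.
mundane = [(0.5, 10), (0.5, 0)]

# A "mugging": an astronomically tiny probability of an astronomical payoff.
mugging = [(1e-12, 1e15), (1 - 1e-12, 0)]

print(expected_value(mundane))  # 5.0
print(expected_value(mugging))  # ~1000: dominates, despite almost certainly paying nothing
```

A naive expected-value maximiser takes the mugging every time, which is exactly the behaviour critics point to.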
Given that expected value has this known weakness, I think:
EA should seek out weaknesses in the other thinking tools we use, and perhaps pay philosophers, statisticians and economists to make these weaknesses clearer to EAs.
More EAs should become aware of the weaknesses of the other thinking tools often used in EA.
For example:
GiveWell has written about how expected value also fails to account for evidence quality.
There is a Wikipedia section covering criticisms of various hierarchies of evidence.
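The evidence-quality point can be illustrated with a simple Bayesian adjustment: under a normal prior and normal measurement error, a noisy estimate gets shrunk toward the prior in proportion to its noise. This is a toy sketch with invented numbers, not GiveWell’s actual model.

```python
# Minimal sketch of a Bayesian adjustment for evidence quality (toy numbers).
def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    """Normal prior + normal measurement error: precision-weighted average."""
    prior_precision = 1 / prior_var
    estimate_precision = 1 / estimate_var
    return (prior_mean * prior_precision + estimate * estimate_precision) / (
        prior_precision + estimate_precision
    )

# Two interventions with the same point estimate (100) but different evidence quality.
prior_mean, prior_var = 10, 25  # prior: most interventions cluster near 10

strong_evidence = posterior_mean(prior_mean, prior_var, 100, 25)    # low-noise study
weak_evidence = posterior_mean(prior_mean, prior_var, 100, 2500)    # back-of-envelope guess

print(strong_evidence)  # 55.0 — strong evidence moves us a long way
print(weak_evidence)    # ~10.9 — weak evidence is mostly shrunk back to the prior
```

Naive expected value treats both estimates identically at 100; the adjustment discounts the noisier one heavily, which is the gap in the tool that this criticism points at.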