It seems bad if we’re basing how to do the most good on whims and biases.
I agree. However, in cases where priors play a crucial role, shouldn't one simply prioritise gathering more evidence until there is reasonable convergence about what to do (among a given group of people, for a particular decision)?
In some cases, we can’t gather strong enough evidence, say because:

- the questions concern very speculative or unprecedented possibilities, and the evidence would be either too indirect and weak or arrive too late to be very action-guiding (e.g. often for AI risk or conscious subsystems), or
- there is too much noise or confounding, the sample size is too small, and anything like an RCT is impractical (e.g. for policy or corporate outreach) or wouldn’t generalize well, or
- the disagreements are partly conceptual, definitional or philosophical, e.g. “What is consciousness?”, “What is the hedonic intensity of an experience?”

EDIT: or, generally, the window to intervene is too short to wait for the evidence.
In such cases, I think imprecise probabilities are the way to go to reduce arbitrariness. We can do sensitivity analysis: if whether the intervention looks good or bad overall depends heavily on fairly arbitrary judgements or priors, we might disprefer it and instead support things that are more robustly positive. This is difference-making ambiguity aversion.
And/or we can do some kind of bracketing.
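To make the sensitivity-analysis idea concrete, here is a minimal sketch of checking whether an intervention's sign is robust across a set of priors (the "imprecise probability" range). All the numbers, parameter names, and the simple expected-value model are made up purely for illustration; real analyses would use domain-specific models.

```python
# Hypothetical sensitivity analysis over a set of reasonable priors,
# rather than a single precise prior. All numbers are illustrative.

def expected_value(p_sentient, moral_weight, welfare_gain_if_sentient):
    """Expected welfare gain of a hypothetical intervention (made-up model)."""
    return p_sentient * moral_weight * welfare_gain_if_sentient

# Ranges of priors we consider reasonable (illustrative values).
priors = [0.05, 0.2, 0.5, 0.8]          # probability the beings are sentient
moral_weights = [0.01, 0.1, 1.0]        # moral weight relative to a human

cost = 0.03  # cost of the intervention in the same welfare units (made up)

# Net expected value under every combination of priors.
evs = [expected_value(p, w, 1.0) - cost
       for p in priors for w in moral_weights]

if all(ev > 0 for ev in evs):
    print("Robustly positive under every prior considered.")
elif all(ev < 0 for ev in evs):
    print("Robustly negative under every prior considered.")
else:
    # The sign flips across reasonable priors: a difference-making
    # ambiguity-averse decision-maker might disprefer this intervention.
    print("Verdict is sensitive to fairly arbitrary priors; not robust.")
```

With these illustrative numbers the verdict flips sign across the prior set, so the intervention would not count as robustly positive; the point of the exercise is that the conclusion is reported relative to the whole set of priors, not a single precise one.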
Also, you should think of research as an intervention itself that could backfire. Who could use the research, and could they use it in ways you’d judge as very negative? How likely is that? This will of course depend on the case and your own specific views.
The reasons you mentioned for gathering strong evidence not being possible (or being very difficult) apply to some extent to efforts to increase human welfare, but humanity has probably still made progress on increasing human welfare over the past 200 years or so? Can one be confident that similar progress cannot be extended to non-humans?
I agree research can backfire. However, at least historically, research on animal sentience, and on how to increase animal welfare, has mostly been beneficial for the target animals?