exploring sensitivity to how the answers would look if the deworming RCT results had been higher or lower, and checking that they change sensibly?
Do you just mean that the change in the posterior expectation is in the correct direction? In that case, we know the answer from theory: yes, for any prior and a wide range of likelihood functions.
Andrews et al. (1972, Lemma 1) show that when the signal B is normally distributed with mean T, then for any prior distribution over T, E[T|B=b] is increasing in b.
This was generalised by Ma (1999, Corollary 1.3) to any likelihood function arising from a B that (i) has T as a location parameter, and (ii) has a strongly unimodal (i.e. log-concave) distribution.
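As a concrete instance of the normal case (this is just standard conjugate algebra, not taken from either paper): with a N(m, s0^2) prior on T and B|T ~ N(T, s^2), the posterior mean is E[T|B=b] = (s0^2·b + s^2·m)/(s0^2 + s^2), a weighted average of the observation b and the prior mean m, so it is visibly increasing in b.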
I guess it depends on what the “correct direction” is thought to be. From the reasoning quoted in my first post, it could be the case that as the study result becomes larger, the posterior expectation should actually fall. It’s not inconceivable that, as we saw the estimate go to infinity, we should start reasoning that the study is so ridiculous as to be uninformative, so that the posterior update becomes smaller and smaller. But I don’t know. What you say seems to suggest that Bayesian reasoning could only do that for rather specific choices of likelihood function, which is interesting.
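To make the contrast concrete, here is a minimal numerical sketch (my own toy example, not from either paper): with a standard normal prior, a normal likelihood satisfies Ma's conditions and the posterior mean increases in b, while a Cauchy likelihood (still a location family, but heavy-tailed and not log-concave) makes the posterior mean rise and then fall back towards the prior mean as b grows, which is exactly the "so ridiculous as to be uninformative" behaviour.

```python
import numpy as np
from scipy import stats

# Toy setup: prior T ~ N(0, 1). Two likelihoods for the signal B:
#   (a) B | T ~ N(T, 1): log-concave ("strongly unimodal"), so Ma's
#       conditions hold and E[T | B=b] must increase in b.
#   (b) B | T ~ Cauchy(T): T is still a location parameter, but the
#       density is not log-concave, so monotonicity is not guaranteed.

t = np.linspace(-30, 30, 60001)             # grid over the parameter T
prior = stats.norm.pdf(t)                   # N(0, 1) prior density

def posterior_mean(b, lik):
    """E[T | B=b] by numerical integration on the grid."""
    w = lik(b, t) * prior                   # unnormalised posterior
    return (t * w).sum() / w.sum()          # uniform grid spacing cancels

normal_lik = lambda b, t: stats.norm.pdf(b, loc=t, scale=1)
cauchy_lik = lambda b, t: stats.cauchy.pdf(b, loc=t)

for b in [0, 1, 2, 5, 10, 25]:
    print(f"b={b:>2}  normal: {posterior_mean(b, normal_lik):6.3f}"
          f"  cauchy: {posterior_mean(b, cauchy_lik):6.3f}")

# Normal likelihood: the posterior mean is exactly b/2, increasing without
# bound. Cauchy likelihood: it rises for small b, then falls back towards
# the prior mean 0 as b grows -- the extreme result gets discounted.
```

The point of the sketch is just that log-concavity of the likelihood is what rules out this "rejection" behaviour; heavy tails bring it back.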