[Question] When should the inverse-variance method be applied to distributions?
Given 2 distributions X1 and X2 which are independent estimates of the distribution X, this can be estimated with the inverse-variance method from:

X = (X1/V(X1) + X2/V(X2)) / (1/V(X1) + 1/V(X2)).
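As a minimal sketch of the formula above (the helper name `combine_inverse_variance` is mine, and the inputs are the means and variances of the two estimates):

```python
def combine_inverse_variance(m1, v1, m2, v2):
    # Inverse-variance weights: w_i = 1 / V(X_i).
    w1, w2 = 1.0 / v1, 1.0 / v2
    mean = (m1 * w1 + m2 * w2) / (w1 + w2)
    # For independent X1 and X2, V of the weighted mean is 1 / (w1 + w2).
    variance = 1.0 / (w1 + w2)
    return mean, variance

# Equal variances reduce to a simple average of the means.
print(combine_inverse_variance(10.0, 4.0, 14.0, 4.0))  # (12.0, 2.0)
```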
Under which conditions is this a good approach? For example, for which types of distributions? These questions might be relevant for determining:
A posterior distribution based on distributions for the prior and estimate.
A distribution which combines estimates of different theories.
Some notes:
The inverse-variance method minimises the variance of a weighted mean of X1 and X2.
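This minimisation can be checked numerically: for independent X1 and X2, the variance of w*X1 + (1-w)*X2 is w^2*V(X1) + (1-w)^2*V(X2), and a grid search recovers the inverse-variance weight (the variance values below are illustrative):

```python
import numpy as np

v1, v2 = 4.0, 1.0  # illustrative variances of the two independent estimates
ws = np.linspace(0.0, 1.0, 100001)
# Variance of the weighted mean w*X1 + (1-w)*X2 for independent X1, X2.
variances = ws**2 * v1 + (1.0 - ws) ** 2 * v2
w_best = ws[np.argmin(variances)]
# Inverse-variance weight on X1: (1/V(X1)) / (1/V(X1) + 1/V(X2)).
w_iv = (1.0 / v1) / (1.0 / v1 + 1.0 / v2)
print(w_best, w_iv)  # both are approximately 0.2
```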
Calculating E(X) and V(X) according to the above formula would result in a mean and variance equal to those derived in this analysis from Dario Amodei, which explains how to combine X1 and X2 following a Bayesian approach if they follow normal distributions.
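The agreement with the normal-normal Bayesian update can be verified numerically: multiplying a normal prior density by a normal likelihood on a grid yields the same posterior mean and variance as the inverse-variance formula (the means and variances below are illustrative):

```python
import numpy as np

m1, v1 = 0.0, 1.0   # illustrative prior mean and variance
m2, v2 = 2.0, 0.5   # illustrative observation mean and variance

# Unnormalised posterior density on a grid: prior * likelihood.
x = np.linspace(-10.0, 10.0, 200001)
post = np.exp(-((x - m1) ** 2) / (2 * v1)) * np.exp(-((x - m2) ** 2) / (2 * v2))
w = post / post.sum()  # discrete probability weights

post_mean = (x * w).sum()
post_var = ((x - post_mean) ** 2 * w).sum()

# Closed-form inverse-variance combination.
iv_mean = (m1 / v1 + m2 / v2) / (1 / v1 + 1 / v2)
iv_var = 1 / (1 / v1 + 1 / v2)
print(post_mean, iv_mean)  # both are approximately 4/3
print(post_var, iv_var)    # both are approximately 1/3
```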