I don’t think I follow. Monte Carlo sampling is done from a distribution, which I assume you want to use as the basis of your likelihood function? If so, you can just calculate the likelihood function from that distribution and combine it with your prior to get a posterior distribution.
I was thinking about cases in which X1 and X2 are non-linear functions of arrays of Monte Carlo samples generated from distributions of different types (e.g. loguniform and lognormal). To calculate E(X1), I can simply compute the mean of the elements of X1. I was looking for a similarly simple formula to combine X1 and X2, without having to work with the original distributions used to compute them.
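For concreteness, here is a minimal sketch of the kind of setup I have in mind (the input distributions, their parameters and the non-linear functions are just placeholders):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000  # number of Monte Carlo samples

# Placeholder inputs: one loguniform and one lognormal variable.
a = stats.loguniform.rvs(1, 10, size=n, random_state=rng)
b = rng.lognormal(mean=0, sigma=1, size=n)

# X1 and X2 are non-linear functions of the sampled inputs.
x1 = np.sqrt(a) * b   # samples of estimate 1
x2 = a / (1 + b)      # samples of estimate 2

# E(X1) and E(X2) are just the means of the sample arrays.
print(x1.mean(), x2.mean())
```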
A concrete simple example would be combining the following:
According to estimate 1, X is equally likely to be 1, 2, 3, 4 or 5: X1 = [1, 2, 3, 4, 5].
According to estimate 2, X is equally likely to be 2, 4, 6, 8 or 10: X2 = [2, 4, 6, 8, 10].
The generation mechanisms of estimates 1 and 2 are not known.
How are both X1 and X2 estimates of X when they are different distributions? At this point I am out of my depth, so I do not have an informative answer for you.
I will try to illustrate what I mean with an example:
X could be the total number of confirmed and suspected monkeypox cases in Europe as of July 1, 2022.
X1 could be a distribution fitted to 3 quantiles predicted for X by forecaster A (as in Metaculus’ questions which do not involve forecasting probabilities).
X2 could be a distribution fitted to 3 quantiles predicted for X by forecaster B.
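As an illustration of what such a fit could look like (the quantiles, and the choice of a lognormal, are purely hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical 5th, 50th and 95th percentiles predicted by forecaster A.
probs = np.array([0.05, 0.50, 0.95])
quantiles = np.array([2e3, 1e4, 6e4])

# For a lognormal, ln(quantile_p) = mu + sigma * z_p, where z_p is the
# standard normal quantile, so a least-squares line through
# (z_p, ln(quantile_p)) gives sigma (slope) and mu (intercept).
z = stats.norm.ppf(probs)
sigma, mu = np.polyfit(z, np.log(quantiles), 1)

# Monte Carlo samples of X1 from the fitted distribution.
rng = np.random.default_rng(0)
x1 = rng.lognormal(mean=mu, sigma=sigma, size=100_000)
print(mu, sigma, x1.mean())
```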
Meanwhile, I have realised the inverse-variance method minimises the variance of a weighted mean of X1 and X2 (and have updated the question above to reflect this).
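To spell out that point, treating X1 and X2 as independent, unbiased estimates of X (which is an assumption), the weighted mean and its variance are:

```latex
\bar{X} = w X_1 + (1 - w) X_2, \qquad
\operatorname{Var}(\bar{X}) = w^2 \sigma_1^2 + (1 - w)^2 \sigma_2^2 ,
% Setting the derivative with respect to w to zero gives the
% variance-minimising (inverse-variance) weight and minimum variance:
w^* = \frac{1/\sigma_1^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \qquad
\operatorname{Var}(\bar{X}^*) = \frac{1}{1/\sigma_1^2 + 1/\sigma_2^2}.
```

For the toy arrays above, Var(X2) = 4 Var(X1), so the inverse-variance weights would be 0.8 for X1 and 0.2 for X2 (if one treated them as independent, unbiased estimates).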