The value ratio table, as shown, is a presentation/visualization of the utility function (assuming you have joint distributions).
The key question is how to store the information within the utility function.
It’s really messy to try to store meaningful joint distributions in conventional ways, especially if you want to approximate said distributions using multiple pieces. It’s especially hard to do this with multiple people, because they would then need to coordinate to ensure they are using the same scales.
The value ratio functions are basically one specific way to store/organize and think about this information. I think this is feasible to work with for approximating large utility functions without too many trade-offs.
“Joint distributions on values where the scales are arbitrary” seem difficult to intuit/understand, so I think representing them as ratios is typically a useful practice.
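To make the ratio representation concrete, here is a minimal Python sketch of one way it could be stored: a lognormal distribution per item pair, which can be sampled from. The item names and numbers are hypothetical, invented for illustration.

```python
import numpy as np

# Hypothetical sketch: store a utility function as pairwise value ratios,
# each represented as a lognormal distribution over the ratio.
# The item names and parameters below are invented for illustration.
value_ratios = {
    ("intervention_a", "intervention_b"): (8.0, 0.5),  # (median ratio, sd of log-ratio)
    ("intervention_b", "intervention_c"): (0.6, 0.8),
}

def sample_ratio(pair, n=10_000, seed=0):
    """Sample the stored value ratio for a pair of items."""
    median, sigma = value_ratios[pair]
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=np.log(median), sigma=sigma, size=n)

samples = sample_ratio(("intervention_a", "intervention_b"))
print(samples.mean())  # Monte Carlo estimate of the expected ratio
```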
So constructing a value ratio table means estimating a joint distribution of values from a subset of pairwise comparisons, then sampling from the distribution to fill out the table?
In that case, I think estimating the distribution is the hard part. Your example is straightforward because it features independent estimates or simple functional relationships.
Estimation is actually pretty easy (using linear regression) and has essentially been a solved problem since 1952: Scheffé, H. (1952). An Analysis of Variance for Paired Comparisons. Journal of the American Statistical Association, 47(259), 381–400. https://doi.org/10.1080/01621459.1952.10501179
I wrote about the methodology (before finding Scheffé's paper) here.
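For concreteness, here is a minimal sketch of the regression idea (not necessarily Scheffé's exact procedure): treat each observed log-ratio as a noisy difference of log-values and solve by least squares, pinning one item's value to fix the arbitrary scale. The comparison data are made up.

```python
import numpy as np

# Each observed log-ratio log R_ij is modeled as v_i - v_j + noise,
# where v_i = log E(X_i). We pin v_0 = 0 because the scale is only
# identified up to a common constant. Data are invented for this sketch.
comparisons = [  # (i, j, observed log-ratio)
    (0, 1, 1.9), (1, 2, 0.8), (0, 2, 3.1), (2, 3, -0.4),
]
n_items = 4

A = np.zeros((len(comparisons), n_items))
y = np.zeros(len(comparisons))
for row, (i, j, log_r) in enumerate(comparisons):
    A[row, i], A[row, j] = 1.0, -1.0
    y[row] = log_r

# Drop column 0 to impose v_0 = 0, then ordinary least squares.
v_rest, *_ = np.linalg.lstsq(A[:, 1:], y, rcond=None)
v = np.concatenate([[0.0], v_rest])
print(np.exp(v))  # estimated E(X_i), relative to item 0
```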
I can see how this gets you $E(\mathrm{value}_i \mid \text{comparisons})$ for each item $i$, but not $P\big((\mathrm{value}_i)_{i \in \text{items}} \mid \text{comparisons}\big)$. One of the advantages Ozzie raises is the possibility to keep track of correlations in value estimates, which requires more than the marginal expectations.
I’m not sure what you mean. I’m thinking about pairwise comparisons in the following way.
(a) Every pair of items $i, j$ has a true ratio of expectations $E(X_i)/E(X_j) = \mu_{ij}$. I hope this is uncontroversial.
(b) We observe the variables $R_{ij}$ according to $\log R_{ij} = \log \mu_{ij} + \epsilon_{ij}$ for some normally distributed $\epsilon_{ij}$. Error terms might be dependent, but that complicates the analysis. (And is most likely not worth it.) This step could be more controversial, as there are other possible models to use; a small simulation of this model is sketched below.
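A minimal simulation of model (b), with made-up true means and noise scale, to show what the observed data look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulation of (b): pick illustrative true means, then observe noisy
# log-ratios. The means and noise scale are invented for this sketch.
true_means = np.array([10.0, 4.0, 1.0])  # E(X_i); only the ratios matter
sigma = 0.3                              # sd of the log-scale error eps_ij

pairs = [(0, 1), (0, 2), (1, 2)]
log_R = {
    (i, j): np.log(true_means[i] / true_means[j]) + rng.normal(0.0, sigma)
    for i, j in pairs
}
print(log_R)  # noisy log-ratios: the inputs to a regression like the one above
```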
Note that you will get a distribution over every $E(X_i)$ too with this approach, but that would be in the Bayesian sense, i.e., $p(E(X_i) \mid \text{comparisons})$, when we have a prior over $E(X_i)$.
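Under the (assumed) further choice of a zero-mean normal prior on the log-values, that posterior is Gaussian in closed form, and its covariance matrix is exactly what captures the correlations mentioned above. A sketch, reusing the design matrix from the regression example; the prior and noise scales are assumed values:

```python
import numpy as np

# Bayesian version: with a normal prior on v = log E(X), the posterior
# p(v | comparisons) is Gaussian in closed form. A proper prior also
# fixes the scale, so no identifiability constraint is needed.
comparisons = [(0, 1, 1.9), (1, 2, 0.8), (0, 2, 3.1), (2, 3, -0.4)]
n_items, sigma, tau = 4, 0.3, 2.0  # noise sd and prior sd are assumptions

A = np.zeros((len(comparisons), n_items))
y = np.array([c[2] for c in comparisons])
for row, (i, j, _) in enumerate(comparisons):
    A[row, i], A[row, j] = 1.0, -1.0

prior_prec = np.eye(n_items) / tau**2            # zero-mean normal prior
post_prec = prior_prec + A.T @ A / sigma**2
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (A.T @ y / sigma**2)

print(post_mean)                    # E[v | comparisons]
print(np.sqrt(np.diag(post_cov)))   # marginal posterior sds
# Off-diagonal entries of post_cov hold the correlations between items.
```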