The tables created in the web app are fully compatible with having a single unit.
For every table, one could use a single line of the table to generate the rest. Knowing value(item)/value(reference) for all items, we can use value(item1)/value(item2) = (value(item1)/value(reference)) / (value(item2)/value(reference)) to construct arbitrary entries.
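A minimal sketch of that construction, with hypothetical items and point values (distributions come later; this is just the arithmetic):

```python
# Hypothetical point values relative to one reference item (reference = 1).
ratios_to_ref = {"A": 2.0, "B": 0.5, "C": 4.0}  # value(item) / value(reference)

def relative_value(item1, item2):
    # value(item1)/value(item2)
    #   = (value(item1)/value(reference)) / (value(item2)/value(reference))
    return ratios_to_ref[item1] / ratios_to_ref[item2]

# The full table follows from the single column above.
table = {(i, j): relative_value(i, j)
         for i in ratios_to_ref for j in ratios_to_ref}

print(table[("A", "B")])  # 4.0
print(table[("B", "C")])  # 0.125
```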
The problem here is the correlations. The function you describe would work if you kept the correlations, but this would be very difficult.
In practice, when lists are made with respect to a single unit, the correlations / joint densities are basically never captured.
If you don’t capture the correlations, then the equation you provided would often result in a value that is much more uncertain than would actually be the case.
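A toy Monte Carlo sketch of this point, with made-up numbers: two items share a common uncertain factor (say, the uncertain value of the reference unit itself). Dividing the joint samples cancels that factor; dividing independent marginal draws, as the single-unit equation implicitly does, does not:

```python
import math
import random
import statistics

random.seed(0)
N = 100_000

# Two items whose values share a common uncertain factor (e.g. both scale
# with the same uncertain value of the reference unit).
shared = [random.lognormvariate(0, 1.0) for _ in range(N)]
item1 = [s * random.lognormvariate(0, 0.1) for s in shared]
item2 = [s * random.lognormvariate(0, 0.1) for s in shared]

# Joint (correlated) estimate of item1/item2: the shared factor cancels.
joint_ratio = [a / b for a, b in zip(item1, item2)]

# Single-unit style: keep only each item's marginal distribution, then
# divide independent draws (shuffling breaks the correlation).
naive_ratio = [a / b for a, b in zip(item1, random.sample(item2, N))]

# Compare spreads on the log scale: the naive ratio is far wider.
print(statistics.stdev(map(math.log, joint_ratio)))
print(statistics.stdev(map(math.log, naive_ratio)))
```

With these numbers, the joint ratio’s log-scale standard deviation is around 0.14 (only the small idiosyncratic noise remains), while the naive ratio’s is around 1.4, roughly ten times wider.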
So my idea of ‘strict relative values’ turns out to be an illusory edge case if we use distributions and not numbers, and in practice we’ll usually be in the ‘generalized case’ anyway.
I fear my not-grokking of the implications remains. But at least I no longer mistakenly think I fully understand the concept.
It is probably not worth the effort for you to teach me all about the approach, but I’ll still summarize some of my remaining questions. Possibly my confusions will be shared by others who try to understand or apply relative value functions in the future.
If someone hands me a table with distributions drawn on it, what exactly do I learn? What decisions do I make based on the table?
Is the meaning of each entry “How many times more value is there in item1 than in item2? (Provide a distribution)”?
Would one only use ‘direct steps’ in decision-making? How is “path dependency” interpreted?
Usually, x_ij will just give a more precise distribution than the one you would get from x_ik ∘ x_kj[1]. But it could also turn out that the indirect path yields a narrower, or an interestingly different, distribution[2].
What is the necessary knowledge for people who want to use relative value functions? Can I do worse by using relative values naively, compared to using a single unit?
As you write, this is not really well-defined, as one would need the correlations to combine the distributions perfectly. But there should still be some bounds one could get on the outcome distribution.
For example, it might totally happen that I feel comfortable giving precise monetary values to some things I enjoy, but feel much less certain if I try to compare them directly.
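To illustrate the question about direct versus indirect paths: under an independence assumption (hypothetical lognormal estimates), chaining x_ik ∘ x_kj adds the log-scale variances, so the indirect path is wider by a factor of about √2 when both links have the same spread:

```python
import math
import random
import statistics

random.seed(1)
N = 50_000

def ratio_estimate(log_median, log_sd):
    """Samples for a hypothetical relative-value estimate (lognormal)."""
    return [random.lognormvariate(log_median, log_sd) for _ in range(N)]

x_ik = ratio_estimate(math.log(2.0), 0.3)         # item i vs. item k
x_kj = ratio_estimate(math.log(1.5), 0.3)         # item k vs. item j
x_ij_direct = ratio_estimate(math.log(3.0), 0.3)  # direct estimate of i vs. j

# Indirect path: multiply the two ratio estimates (independence assumed).
x_ij_indirect = [a * b for a, b in zip(x_ik, x_kj)]

# Log-scale spreads: the indirect path is wider by a factor of ~sqrt(2).
print(statistics.stdev(map(math.log, x_ij_direct)))
print(statistics.stdev(map(math.log, x_ij_indirect)))
```

If the two links were correlated, the indirect spread could be narrower or wider than this; the √2 factor is specific to the independence assumption.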
Is the meaning of each entry “How many times more value is there in item1 than in item2? (Provide a distribution)”?
Yep, that’s basically it.
Would one only use ‘direct steps’ in decision-making? How is “path dependency” interpreted?
I’m not sure what you are referring to here. I would flag that the relative value type specification is very narrow—it just states how valuable things are, not the “path of impact” or anything like that.
what is the necessary knowledge for people who want to use relative value functions? Can I do worse compared to using a single unit by using relative values naively?
You need some programming infrastructure to do them. The Squiggle example I provided is one way of going about this. I’d flag that it would take some fiddling to do this in other languages.
If you try doing relative values “naively” (without functions), then I’d expect you’d run into issues when dealing with a lot of heterogeneous kinds of value estimates. (Assuming you’d be trying to compare them all to each other.) Single-unit evaluations are fine for small lists of similar things.
Would one only use ‘direct steps’ in decision-making? How is “path dependency” interpreted?
I’m not sure what you are referring to here. I would flag that the relative value type specification is very narrow—it just states how valuable things are, not the “path of impact” or anything like that.
After talking to GPT about this[1], I think that my concern is actually already covered by your
If people were doing it by hand, there could be contradictory properties, as you mention. But with programming, which we likely want anyway, it’s often trivial or straightforward to make consistent tables.
and could be addressed to a large degree with a few automated checks and a user interface (one could even auto-fill the table given the first line of entries by assuming ~maximum resulting uncertainty for the unknown correlations). I feel like this could be really helpful for reflecting on one’s values if done right, or overwhelming if done wrong.
[GPT’s answer includes a summary, so I’ll leave out my prompt:] I see your point now. You’re referring to a potential inconsistency in the triangle inequality context for relative values. Let’s use a concrete example to explore this.
Suppose the relative value of ‘chocolate’ to ‘vanilla’ is given by a 90% credibility interval from 0.8 to 1.2, and ‘banana’ to ‘vanilla’ is from 1.0 to 1.5. This would imply that, based on your preferences for chocolate and banana ice cream relative to vanilla, ‘banana’ should have a relative value compared to ‘chocolate’ that lies within a certain range.
If, however, you then provide a wide 90% credibility interval for ‘chocolate’ to ‘banana’ (e.g., 0.2 to 5), it may seem inconsistent with the narrower ranges implied by the comparisons with ‘vanilla’.
In this case, it may be that you need to revisit your estimates for ‘chocolate’ to ‘vanilla’ and ‘banana’ to ‘vanilla’. If you feel a lot of uncertainty about ‘chocolate’ vs. ‘banana’, perhaps you also should feel more uncertainty about these options compared to ‘vanilla’ than you initially thought. You may have overestimated your confidence in these comparisons.
Alternatively, if you’re confident in your estimates of ‘chocolate’ to ‘vanilla’ and ‘banana’ to ‘vanilla’, you may want to narrow down your 90% credibility interval for ‘chocolate’ vs. ‘banana’.
In any case, it’s a good point to keep in mind when building a relative value table. You want to be consistent in your uncertainty estimates across different comparisons. If there seems to be a contradiction, it’s a sign that you may need to rethink some of your estimates.
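GPT’s example could be turned into a rough automated check. The sketch below treats the 90% intervals as if they were hard bounds, which real credible intervals are not, so this is only a heuristic: the two comparisons against ‘vanilla’ imply a range for ‘chocolate’ vs. ‘banana’, and a stated interval that is disjoint from that range, or far wider, gets flagged.

```python
def implied_range(x_over_ref, y_over_ref):
    """Range for x/y implied by treating interval endpoints as hard bounds."""
    (a, b), (c, d) = x_over_ref, y_over_ref
    return (a / d, b / c)

def looks_consistent(stated, implied, slack=2.0):
    """Flag stated intervals disjoint from, or far wider than, the implied range."""
    lo, hi = stated
    ilo, ihi = implied
    disjoint = hi < ilo or lo > ihi
    much_wider = (hi / lo) > slack * (ihi / ilo)
    return not (disjoint or much_wider)

choc_van = (0.8, 1.2)  # chocolate vs. vanilla, 90% interval
ban_van = (1.0, 1.5)   # banana vs. vanilla, 90% interval

implied = implied_range(choc_van, ban_van)    # chocolate vs. banana
print(implied)                                # (0.8/1.5, 1.2/1.0)
print(looks_consistent((0.2, 5.0), implied))  # False: suspiciously wide
print(looks_consistent((0.6, 1.3), implied))  # True
```

The `slack` factor and the hard-bounds reading are both arbitrary choices here; a real implementation would propagate the distributions instead.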
Okay, so maybe relative values are a more straightforward concept than I thought/feared :)
Yeah, I really don’t think they’re complicated conceptually; it’s just tricky to be explicit about them. It’s a fairly simple format, all things considered.
I think that using them in practice takes a little time to feel very comfortable. I imagine most users won’t need to think about a lot of the definitions that much.
Ooh, that makes sense. Thanks!
Thanks! I’ll reply in separate comments