If people were doing it by hand, there could be contradictory properties, as you mention. But with programming, which we likely want anyway, it’s often trivial or straightforward to make consistent tables.
> I think that this is actually the additional information which having such a table adds compared to using a single central unit of comparison. If there were no path dependency, the table would be redundant and could be replaced by a single central unit (= any single line of the table). This makes me extra curious about the question of what this “extra information” really means?
I think you might not quite yet grok the main benefits of relative values I’m trying to get at. I’ve had a hard time explaining them. It’s possible that going through the web app, especially with the video demo, would help.
Single tables could work for very similar kinds of items, but have a lot of trouble with heterogeneous items. There’s often no unit that’s a good fit for everything. If you were to try to put things into one table, you’d get the problems I flag in the two thought experiments.
> Possibly something like this is the best we can do as long as we cannot define an explicit utility function

To be clear, relative values, as I suggest, are basically more explicit than utility functions, not less. You still create explicit utility functions, but there’s better support for appreciating some uncertain combinations, while storing other signal.
> I think you might not quite yet grok the main benefits of relative values
Thanks for your reply, you are probably right. Let me share my second attempt at understanding relative values, after going through the web app.
‘strict’ relative values
If I did not overlook some part in the code, the tables created in the web app are fully compatible with having a single unit.
For every single table, one could use a single line of the table to generate the rest of the table. Knowing value(item)/value(reference) for all items, we can use

value(item1)/value(item2) = [value(item1)/value(reference)] / [value(item2)/value(reference)]

to construct arbitrary entries.
Between the different tables, one would need to add a single translation factor which one could then use to merge the tables to a big single table.
Without such a translation factor, the tables would remain disconnected (there could be a single unit for all tables, but it is not specified). Still, the tables could be used to make meaningful decisions within the scope of each table.
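A minimal sketch of this ‘strict’ construction (item names, numbers, and the translation factor below are all hypothetical; real relative values would use distributions rather than point estimates):

```python
def full_table(vs_reference):
    """All entries value(i)/value(j), built from one line of the table:
    value(i)/value(reference) for every item i."""
    return {
        (i, j): vi / vj
        for i, vi in vs_reference.items()
        for j, vj in vs_reference.items()
    }

# One line of the table determines every other entry.
table_a = full_table({"bednet": 1.0, "cash_transfer": 0.2, "deworming": 0.5})
print(table_a[("bednet", "cash_transfer")])  # 5.0

def merge(vs_ref_a, vs_ref_b, ref_b_over_ref_a):
    """Merge two disconnected tables via one translation factor,
    the assumed ratio value(reference_B)/value(reference_A)."""
    combined = dict(vs_ref_a)
    combined.update({k: v * ref_b_over_ref_a for k, v in vs_ref_b.items()})
    return full_table(combined)
```

The point of the sketch is just that, under strictness, a single row plus one translation factor per extra table carries all the information.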
If this is the intent of how relative values are meant to be used, my impression of their advantages is:
- they are, in principle, compatible with a single value/utility function. One does not need to change one’s philosophy at all when switching over from using a single unit for measuring value.
- they allow for a more natural thought process when exploring the value of interventions
- one can use crisply defined units at each step of one’s research: “Person in city x with income y gets $1” can be distinguished from “Person in city x with income y gets $5” as necessary.
- throughout the process, one can work ‘bottom-up’ or ‘top-down’; for bottom-up, one starts with very specific value measures and expands their connections (via relative values / translation factors) to more and more abstract/general values (such as, maybe, WELLBYs)
- if one feels that there is an unbridgeable gap between two currently unconnected groups of values, one can keep them as separate value tables and decide to add the connection at some time in the future
- thanks to using distributions, one can also decide to add a connection and use a very high uncertainty instead.
This version of relative values (let’s call it “strictly coherent relative values according to Mart’s understanding v2”, or “strict relative values” for short) feels quite intuitive to me, and also seems quite similar to how GiveWell’s current cost-effectiveness analyses are done (except that they do not create a value table with all-to-all translations, and that there are no/fewer distributions[1]).
Your link to the usage of relative values in Finance seems to me to be compatible with this definition of relative values.
Beyond ‘strict’ relative values
But, from reading your OP (and the recommended section of the video), my impression is that relative values are intended to be used to describe situations more general than my “strict relative values”.
Your comments, and also David Johnston’s, seem to refer to a much more general case.
For this more general version, my ‘strictness’ equation

value(item1)/value(item2) = [value(item1)/value(reference)] / [value(item2)/value(reference)]

would typically not be valid. Translated into David’s notation, the ‘strictness’ equation would be

x_ij = x_0j / x_0i

where 0 is the reference value, and x_ij are the relative values comparing i and j.
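A quick check of what the ‘strictness’ equation implies: under strictness there is no path dependency, since any indirect route through a third item k collapses to the direct entry (the reference terms cancel):

$$x_{ik} \, x_{kj} \;=\; \frac{x_{0k}}{x_{0i}} \cdot \frac{x_{0j}}{x_{0k}} \;=\; \frac{x_{0j}}{x_{0i}} \;=\; x_{ij}$$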
David’s

> Note that, under this interpretation, we should not expect x_ij = 1/x_ji unless i = j. This is because items have different values in different contexts.

is clearly not compatible with ‘strictness’[2].
In such a generalized case, I think that the philosophical status of what entries mean is much more complicated.
I do not have a grasp on what the added degrees of freedom do and why it is good to have them. In my last comment, I kind of assumed that any deviation from strictness would be “irrational inconsistency” by definition. But maybe I am just missing the relevant background and this really does capture something important?

[1] This impression is based on the 2023 spreadsheet. This might well be a mistaken impression.

[2] Proof: insert x_ij and x_ji into the ‘strictness’ equation and see that the results are the reciprocals of each other.
> the tables created in the web app are fully compatible with having a single unit.

> For every single table, one could use a single line of the table to generate the rest of the table. Knowing value(item)/value(reference) for all items, we can use value(item1)/value(item2) = [value(item1)/value(reference)] / [value(item2)/value(reference)] to construct arbitrary entries.
The problem here is the correlations. The function you describe would work if you kept the correlations, but this would be very difficult.
In practice, when lists are done with respect to a single unit, the correlations / joint densities are basically never captured.
If you don’t capture the correlations, then the equation you provided would result in a value that is often much more uncertain than would actually be the case.
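A toy Monte Carlo illustration of this point (all numbers hypothetical; Python stdlib rather than Squiggle): two items are both well understood in some concrete unit, say QALYs, but converting that unit into the table’s reference unit involves a shared, highly uncertain factor.

```python
import math
import random
import statistics

random.seed(0)
N = 100_000

qalys_a, qalys_b = 7.0, 5.0
k = [random.lognormvariate(0, 1.0) for _ in range(N)]        # shared wide uncertainty
k_indep = [random.lognormvariate(0, 1.0) for _ in range(N)]  # independent copy of k

# Single-unit table rows: value(A)/value(ref) and value(B)/value(ref).
row_a = [qalys_a * ki for ki in k]
row_b = [qalys_b * ki for ki in k]

def log_sd(xs):
    """Spread of a positive-valued sample on the log scale."""
    return statistics.stdev(math.log(x) for x in xs)

# Dividing the rows WITH the correlation kept: k cancels, A/B is precise.
kept = [a / b for a, b in zip(row_a, row_b)]

# Dividing marginals as if independent (what a plain single-unit list
# lets you do): k no longer cancels, and the result is far wider.
dropped = [(qalys_a * ka) / (qalys_b * kb) for ka, kb in zip(k, k_indep)]

print(log_sd(kept))     # ~0: the direct relative value is precise
print(log_sd(dropped))  # ~1.4: spurious uncertainty from the lost correlation
```

So a table that stores the direct A-vs-B entry keeps information that the single-unit marginals throw away.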
Ooh, that makes sense, thanks! So my idea of ‘strict relative values’ turns out to be an illusory edge case once we use distributions rather than point estimates, and in practice we’ll usually be in the ‘generalized case’ anyway.
I fear my not-grokking of the implications remains. But at least I no longer mistakenly think that I fully understood the concept.
It is probably not worth the effort for you to teach me all about the approach, but I’ll still summarize some of my remaining questions. Possibly my confusions will be shared by others who try to understand/apply relative value functions in the future.
If someone hands me a table with distributions drawn on it, what exactly do I learn? What decisions do I make based on the table?

- Is the meaning of each entry “How many times more value is there in item1 than in item2? (Provide a distribution)”?

Would one only use ‘direct steps’ in decision-making? How is “path dependency” interpreted?

- Usually, x_ij will just give a more precise distribution than one would get from combining x_ik and x_kj[1]. But it could also turn out that the indirect path produces a narrower, or an interestingly different, distribution[2].

What is the necessary knowledge for people who want to use relative value functions? Can I do worse compared to using a single unit by using relative values naively?

[1] As you write, this is not really well-defined, as one would need the correlations to combine the distributions perfectly. But there should still be some bounds one could get on the outcome distribution.

[2] For example, it might totally happen that I feel comfortable with giving precise monetary values to some things I enjoy, but feel much less certain if I try to compare them directly.
> Is the meaning of each entry “How many times more value is there in item1 than in item2? (Provide a distribution)”?
Yep, that’s basically it.
> Would one only use ‘direct steps’ in decision-making? How is “path dependency” interpreted?
I’m not sure what you are referring to here. I would flag that the relative value type specification is very narrow—it just states how valuable things are, not the “path of impact” or anything like that.
> What is the necessary knowledge for people who want to use relative value functions? Can I do worse compared to using a single unit by using relative values naively?
You need some programming infrastructure to do them. The Squiggle example I provided is one way of going about this. I’d flag that it would take some fiddling to do this in other languages.
If you try doing relative values “naively” (without functions), then I’d expect you’d run into issues when dealing with a lot of heterogeneous kinds of value estimates (assuming you’d be trying to compare them all to each other). Single-unit evaluations are fine for small lists of similar things.
> Would one only use ‘direct steps’ in decision-making? How is “path dependency” interpreted?

> I’m not sure what you are referring to here. I would flag that the relative value type specification is very narrow—it just states how valuable things are, not the “path of impact” or anything like that.
After talking to GPT about this[1], I think that my concern is actually already covered by your
> If people were doing it by hand, there could be contradictory properties, as you mention. But with programming, which we likely want anyway, it’s often trivial or straightforward to make consistent tables.
and could be addressed to a large degree with a few automated checks and a user interface (one could even auto-fill the table, given the first line of entries, by assuming ~maximum resulting uncertainty for the unknown correlations). I feel like this could be really helpful for reflecting on one’s values if done right, or overwhelming if done wrong.
[1] [GPT’s answer includes a summary, so I’ll leave out my prompt:] I see your point now. You’re referring to a potential inconsistency in the triangle inequality context for relative values. Let’s use a concrete example to explore this.
Suppose the relative value of ‘chocolate’ to ‘vanilla’ is given by a 90% credibility interval from 0.8 to 1.2, and ‘banana’ to ‘vanilla’ is from 1.0 to 1.5. This would imply that, based on your preferences for chocolate and banana ice cream relative to vanilla, ‘banana’ should have a relative value compared to ‘chocolate’ that lies within a certain range.
If, however, you then provide a wide 90% credibility interval for ‘chocolate’ to ‘banana’ (e.g., 0.2 to 5), it may seem inconsistent with the narrower ranges implied by the comparisons with ‘vanilla’.
In this case, it may be that you need to revisit your estimates for ‘chocolate’ to ‘vanilla’ and ‘banana’ to ‘vanilla’. If you feel a lot of uncertainty about ‘chocolate’ vs. ‘banana’, perhaps you also should feel more uncertainty about these options compared to ‘vanilla’ than you initially thought. You may have overestimated your confidence in these comparisons.
Alternatively, if you’re confident in your estimates of ‘chocolate’ to ‘vanilla’ and ‘banana’ to ‘vanilla’, you may want to narrow down your 90% credibility interval for ‘chocolate’ vs. ‘banana’.
In any case, it’s a good point to keep in mind when building a relative value table. You want to be consistent in your uncertainty estimates across different comparisons. If there seems to be a contradiction, it’s a sign that you may need to rethink some of your estimates.
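One such automated check could be sketched as follows, using the ice-cream example. Entries are 90% intervals (lo, hi) for value(row)/value(col); all names and numbers are hypothetical, and treating the intervals as hard bounds (rather than doing the proper distributional calculation) is a deliberate simplification:

```python
def implied_interval(a_over_ref, b_over_ref):
    """Outer bound on a/b from intervals for a/ref and b/ref.
    Ignores correlations, so it is deliberately generous."""
    (a_lo, a_hi), (b_lo, b_hi) = a_over_ref, b_over_ref
    return (a_lo / b_hi, a_hi / b_lo)

def consistent(direct, implied):
    """Flag direct estimates that spill outside the implied outer bound."""
    return implied[0] <= direct[0] and direct[1] <= implied[1]

banana_vanilla = (1.0, 1.5)
choc_vanilla = (0.8, 1.2)
banana_choc_direct = (0.2, 5.0)

implied = implied_interval(banana_vanilla, choc_vanilla)  # banana/choc via vanilla
print(implied)                                  # ≈ (0.83, 1.88)
print(consistent(banana_choc_direct, implied))  # False -> revisit some estimates
```

A user interface could run this over every triangle in the table and highlight the entries whose intervals cannot all be right at once.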
Okay, so maybe relative values are a more straightforward concept than I thought/feared :)
Yea, I really don’t think they’re complicated conceptually; it’s just tricky to be explicit about them. It’s a fairly simple format, all things considered.
I think that using them in practice takes a little time to feel very comfortable. I imagine most users won’t need to think about a lot of the definitions that much.
Thanks for the comment!