What is the theoretical basis for maximizing E[log(value)]?
For something like money, people have approximately logarithmic utility of money, so max E[utility] becomes max E[log(money)]. But if you’re maximizing the well-being of sentient individuals (or some close proxy, like lives saved by bednets), then you’re already directly maximizing utility, so taking the logarithm wouldn’t make sense.
GiveWell’s expected value calculations already use logarithms in calculating the utility of money for cash transfers made by GiveDirectly. If you take the logarithm again, you are now maximizing E[log(log(money))], which doesn’t make sense.
I make a big assumption: that the utility gains multiply together rather than add. There is some basis for this. For example, if there are several independent sources of fatality, the chance of surviving all of them is the product of the survival chances for each source.
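A minimal sketch of that product-of-survival-chances point (the individual probabilities here are made up purely for illustration):

```python
import math

# Hypothetical annual survival probabilities for three
# independent fatality sources (illustrative numbers only).
survival = [0.99, 0.95, 0.999]

# Because the sources are independent, surviving all of them
# is the product of the individual survival chances.
p_all = math.prod(survival)
print(p_all)  # ~0.9396, noticeably below any single factor
```

The same independence assumption is what makes the multiplicative model of utility gains plausible in the first place.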
If you want to maximise the result of the multiplication, take the logarithm, and the product turns into a sum. In that formulation you can see that it’s not the absolute change that matters, but the relative one. I wanted to show a worked example of this here, like a risky vs safe bet over 1 year vs 50 years, but I got stuck and realized I don’t really understand it well enough, so I retract that part. Thanks for the question, though.
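The product-to-sum identity itself, and the fact that only relative changes enter the sum, can be checked in a few lines (the growth factors below are arbitrary illustrations, not the retracted bet example):

```python
import math

# Two multiplicative growth factors applied in sequence,
# e.g. the value is multiplied by 1.5, then by 0.8.
factors = [1.5, 0.8]

# The end result is the product of the factors...
total = math.prod(factors)

# ...and taking the logarithm turns that product into a sum
# of log growth rates: log(a * b) = log(a) + log(b).
log_total = sum(math.log(f) for f in factors)
assert abs(math.log(total) - log_total) < 1e-12

# Note that the starting value never appears: scaling it up or
# down leaves the factors, and hence the log sum, unchanged.
# Only relative changes matter in this formulation.
```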