The VOI simulation calculation discussion you give (“This expectation of value obtainable in the informed state…”) is super-interesting to me; I'm not sure I've seen this laid out before. I love it!
Maybe you could add 1 or 2 more lines or footnotes to make it a bit clearer; I had to think through ‘yeah these distributions represent our beliefs over the probability of each actual value for each charity … so sampling from these reflects what we can expect to learn.’ Maybe it just took me a moment because I didn’t start out as a Bayesian.
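To check that I'm reading it right, here's a minimal Monte Carlo sketch of how I understood the calculation (the charity names and lognormal belief parameters are made up for illustration, not taken from your post):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo draws

# Hypothetical beliefs about each charity's true value (e.g. lives saved per $1M);
# the lognormal parameters here are made up purely for illustration.
beliefs = {
    "Charity A": rng.lognormal(mean=3.0, sigma=0.5, size=n),
    "Charity B": rng.lognormal(mean=2.8, sigma=1.0, size=n),
}
values = np.column_stack(list(beliefs.values()))  # shape (n, 2)

# Uninformed decision: commit now to the charity with the highest expected value.
value_uninformed = values.mean(axis=0).max()

# Informed decision: in each simulated world we learn the true values and pick
# the best charity in that world; average over worlds.
value_informed = values.max(axis=1).mean()

voi = value_informed - value_uninformed  # expected value of (perfect) information
print(f"uninformed: {value_uninformed:.2f}  informed: {value_informed:.2f}  VOI: {voi:.2f}")
```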
Also, I might state the VOI as:

In the context of choosing among discrete alternatives, information is more valuable when:
- It is more likely to change our mind about the best alternative
- The value of the alternatives we are choosing among is likely to be larger
- The environment is high-stakes, implying a greater potential for producing value or harm, given the resources devoted to it
I would want to disambiguate this so it isn't confused with a case where the context is high-stakes but we are trying to choose between 'purple bednets' and 'pink bednets' (nearly equivalent value).
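For instance (again with toy numbers of my own), a case where the stakes are large but the alternatives are nearly equivalent gives a small VOI:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# High-stakes context, but nearly-equivalent alternatives: most of the value is
# common to both colours, and the colour-specific difference is tiny, so learning
# more barely changes the decision and the VOI stays small.
base = rng.lognormal(mean=5.0, sigma=0.5, size=n)   # shared bednet value (large stakes)
purple = base + rng.normal(0.0, 0.1, size=n)        # tiny colour-specific wiggle
pink = base + rng.normal(0.0, 0.1, size=n)
values = np.column_stack([purple, pink])

voi = values.max(axis=1).mean() - values.mean(axis=0).max()
print(f"VOI (purple vs pink): {voi:.3f}")  # small, despite the large stakes
```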
What do you think?