I made a long top-level comment that I hope will clarify some problems with the solution proposed in the original paper.
I ask the same question I asked of OP: give me some guidance that applies for estimating the impact of maximizing actions that doesn't apply for estimating the impact of randomly selected actions.
This is a good point. Somehow, I think you'd want to adjust your posterior downward based on the set (or the number) of options under consideration and on how unlikely the data that makes the intervention look good is. On its own, this isn't very useful, since I don't know how much you should adjust. Maybe there's a way to model this explicitly, but it seems like you'd be trying to model your selection process itself before you've defined it, and then looking for a selection process which satisfies some properties.
You might also want to spend more effort looking for arguments and evidence against each option the more options you're considering.
When considering a larger number of options, you could use some randomness in your selection process or spread funding further (although the latter will be vulnerable to the satisficer's curse if you're using cutoffs).
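To make the adjust-more-when-considering-more-options intuition concrete, here is a toy Monte Carlo sketch (my own illustration, not anything from the paper): if you pick the option with the highest noisy estimate, that estimate systematically overshoots the option's true value, and the overshoot grows with the number of options compared.

```python
import random

def selection_bias(n_options, noise_sd=1.0, trials=2000, seed=0):
    """Average (estimate - true value) for the option with the highest
    noisy estimate. Toy model: true values and estimation noise are
    both normal draws -- an illustrative assumption, not a real setup."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        true_vals = [rng.gauss(0, 1) for _ in range(n_options)]
        estimates = [v + rng.gauss(0, noise_sd) for v in true_vals]
        best = max(range(n_options), key=lambda i: estimates[i])
        total += estimates[best] - true_vals[best]
    return total / trials

# The bias is positive even for 2 options and grows as options are added.
for k in (2, 10, 100):
    print(k, round(selection_bias(k), 2))
```

The same mechanism is why randomly selecting an option (rather than maximizing) has no such bias: the selected option's noise is then zero in expectation.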
What do you mean by "the priors"?
What if I haven't decided on a prior, and multiple different priors (even an infinite set of them) seem equally reasonable to me?
Somehow, I think you'd want to adjust your posterior downward based on the set (or the number) of options under consideration and on how unlikely the data that makes the intervention look good is.
That's the basic idea given by Muehlhauser: corrected posterior EV estimates.
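In its simplest conjugate (normal-normal) form, that correction is just shrinkage of the naive estimate toward a prior mean, with more shrinkage the noisier the estimate. A minimal sketch, with parameter values that are illustrative assumptions rather than anything from the post:

```python
def shrunk_estimate(naive_estimate, prior_mean=0.0, prior_var=1.0, noise_var=1.0):
    """Normal-normal conjugate update: the posterior mean is a weighted
    average of the prior mean and the noisy EV estimate. All default
    parameter values here are assumptions for illustration."""
    w = prior_var / (prior_var + noise_var)  # weight placed on the data
    return prior_mean + w * (naive_estimate - prior_mean)

print(shrunk_estimate(3.0))                 # 1.5: a flashy estimate gets halved
print(shrunk_estimate(3.0, noise_var=4.0))  # noisier data -> shrunk harder
```

This also shows why the curse bites maximizers specifically: the options with the most extreme naive estimates are exactly the ones shrunk the most.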
You might also want to spend more effort looking for arguments and evidence against each option the more options you're considering.
As opposed to equal effort for and against? OK, I'm satisfied. However, if I've done the corrected posterior EV estimation, and then my specific search for arguments against turns up short, then I should increase my EV estimates back towards the original naive estimate.
As I recall, that post found that randomized funding doesn't make sense, which matches my presumptions 100%; I don't see how it could improve funding outcomes.
or spread funding further
I don't see how that would improve funding outcomes.
What if I haven't decided on a prior, and multiple different priors (even an infinite set of them) seem equally reasonable to me?
In Bayesian rationality, you always have a prior. You seem to be considering or defining things differently.
Here we would probably say that your actual prior exists and is simply some kind of aggregate of these possible priors; therefore, it's not the case that we should leap outside our own priors in some sort of violation of standard Bayesian rationality.
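A minimal sketch of that "aggregate of possible priors" point, assuming a binary hypothesis and equal weights on two candidate priors (both assumptions mine, for illustration): the mixture is itself just a prior, and updating it is an ordinary Bayesian update.

```python
def posterior(prior_h, likelihood_h, likelihood_not_h):
    """P(H | data) from a prior P(H) and the likelihoods under H and not-H."""
    num = prior_h * likelihood_h
    return num / (num + (1 - prior_h) * likelihood_not_h)

# Two priors that both seem reasonable; weight them equally (an
# illustrative choice) to form the aggregate prior.
p1, p2 = 0.2, 0.8
aggregate_prior = 0.5 * p1 + 0.5 * p2  # 0.5

# Updating the aggregate is still a standard Bayesian update --
# no need to step outside the framework.
print(round(posterior(aggregate_prior, likelihood_h=0.9, likelihood_not_h=0.3), 3))
```

The weights themselves can also be updated by the data (each candidate prior is scored by its marginal likelihood), which is just hierarchical Bayes rather than a departure from it.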