I made a long top-level comment that I hope will clarify some problems with the solution proposed in the original paper.
I ask the same question I asked of OP: give me some guidance that applies to estimating the impact of maximizing actions but doesn’t apply to estimating the impact of randomly selected actions.
This is a good point. Somehow, I think you’d want to adjust your posterior downward based on the set or the number of options under consideration and on how unlikely the data that makes the intervention look good is. On its own, that’s not very actionable, since I don’t know how much you should adjust for these things (see the simulation sketch below). Maybe there’s a way to model this explicitly, but it seems like you’d be trying to model your selection process itself before you’ve defined it, and then searching for a selection process that satisfies certain properties.
You might also want to spend more effort looking for arguments and evidence against each option the more options you’re considering.
When considering a larger number of options, you could use some randomness in your selection process or spread funding further (although the latter will be vulnerable to the satisficer’s curse if you’re using cutoffs).
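To make the effect concrete, here’s a minimal simulation (all numbers are illustrative assumptions, not anything from the paper): every option’s true value is zero, estimates are truth plus Gaussian noise, and both maximizing and cutoff-based selection inflate the apparent value of whatever gets picked, more so as the option pool grows.

```python
import numpy as np

rng = np.random.default_rng(0)


def mean_top_estimate(n_options, noise_sd=1.0, n_trials=10_000):
    """Average estimated value of the best-looking option when every
    option's true value is 0 and each estimate is truth + Gaussian
    noise. Anything above 0 is pure optimizer's curse."""
    estimates = rng.normal(0.0, noise_sd, size=(n_trials, n_options))
    return estimates.max(axis=1).mean()


for n in (1, 2, 10, 100, 1000):
    print(f"{n:>4} options -> mean estimate of the chosen one: "
          f"{mean_top_estimate(n):+.3f}")

# The satisficer's curse with a cutoff rule: "fund everything whose
# estimate clears c" still selects for lucky noise, so funded options
# look better than they really are.
cutoff = 1.0
estimates = rng.normal(0.0, 1.0, size=100_000)
funded = estimates[estimates > cutoff]
print(f"mean estimate among funded options (true value 0): {funded.mean():+.3f}")
```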
What do you mean by “the priors”?
I mean the case where I haven’t decided on a prior, and multiple different priors (even an infinite set of them) seem equally reasonable to me.
Somehow, I think you’d want to adjust your posterior downward based on the set or the number of options under consideration and on how unlikely the data that makes the intervention look good is.
That’s the basic idea given by Muehlhauser: corrected posterior EV estimates.
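As a sketch of what a corrected posterior EV estimate looks like in the simplest conjugate case (the prior parameters here are assumptions for illustration, not Muehlhauser’s):

```python
def corrected_ev(naive_estimate, estimate_sd, prior_mean=0.0, prior_sd=1.0):
    """Normal-normal Bayesian update: shrink a noisy EV estimate towards
    the prior mean, more strongly the noisier the estimate is relative
    to the prior."""
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = 1.0 / estimate_sd ** 2
    post_mean = (prior_prec * prior_mean + data_prec * naive_estimate) \
        / (prior_prec + data_prec)
    post_sd = (prior_prec + data_prec) ** -0.5
    return post_mean, post_sd


# The same naive estimate of 10 gets very different corrections
# depending on how precisely it was measured.
for sd in (10.0, 1.0, 0.1):
    mean, _ = corrected_ev(10.0, sd)
    print(f"estimate sd = {sd:>4}: corrected EV = {mean:.3f}")
```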
You might also want to spend more effort looking for arguments and evidence against each option the more options you’re considering.
As opposed to equal effort for and against? OK, I’m satisfied. However, if I’ve done the corrected posterior EV estimation and my specific search for arguments against turns up short, then I should increase my EV estimates back towards the original naive estimate.
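A toy Bayes calculation of that last point, with made-up probabilities: a deliberate search for counterarguments that comes up empty should push the estimate back up, and more so the more thorough the search.

```python
def p_good_given_no_counterargument(p_good, p_find_if_bad, p_find_if_good):
    """P(option is good | we searched for counterarguments and found
    none), by Bayes' rule. A fruitless search is evidence in favour,
    and a more thorough search (higher p_find_if_bad) gives stronger
    evidence."""
    joint_good = (1.0 - p_find_if_good) * p_good        # good, nothing found
    joint_bad = (1.0 - p_find_if_bad) * (1.0 - p_good)  # bad, nothing found
    return joint_good / (joint_good + joint_bad)


print(p_good_given_no_counterargument(p_good=0.5,
                                      p_find_if_bad=0.8,
                                      p_find_if_good=0.2))  # -> 0.8
```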
As I recall, that post found that randomized funding doesn’t make sense, which matches my presumptions 100%: I don’t see how it could improve funding outcomes.
or spread funding further
I don’t see how that would improve funding outcomes.
I mean the case where I haven’t decided on a prior, and multiple different priors (even an infinite set of them) seem equally reasonable to me.
In Bayesian rationality, you always have a prior. You seem to be considering or defining things differently.
Here we would probably say that your actual prior exists and is simply some kind of aggregate of these possible priors, so it’s not the case that we should leap outside our own priors in some sort of violation of standard Bayesian rationality.
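One standard way to cash this out is to treat the aggregate as a mixture prior and update it like any other prior. A minimal sketch, where the candidate priors, their equal weights, and the observation model are all illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

# Three candidate priors over the true effect, all of which seem
# equally reasonable, aggregated into a single mixture prior.
candidate_priors = [norm(0.0, 0.5), norm(0.0, 2.0), norm(1.0, 1.0)]
weight = 1.0 / len(candidate_priors)

# One noisy observation x ~ N(theta, obs_sd); posterior on a grid.
x, obs_sd = 2.0, 1.0
theta, dtheta = np.linspace(-6.0, 6.0, 2001, retstep=True)

mixture_prior = sum(weight * p.pdf(theta) for p in candidate_priors)
likelihood = norm.pdf(x, loc=theta, scale=obs_sd)
unnorm = mixture_prior * likelihood
posterior = unnorm / (unnorm.sum() * dtheta)

print("posterior mean under the aggregated prior:",
      round(float((theta * posterior).sum() * dtheta), 3))
```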