This looks interesting, but I’d want to see a formal statement.
Is it the expected value that’s logarithmic, or expected value conditional on nonzero (or sufficiently high) value?
tl;dr: I think under one reasonable interpretation, with logarithmic expected value and precise distributions, the theorem is false. It might be true if made precise in a different way.
If

- you only care about expected value,
- you had the expected value of each project as a function of resources spent (assuming logarithmic expected returns already assumes a lot, but does leave a lot of room), and
- how much you fund one doesn’t affect the distribution of any others (holding their funding constant),
then the uncertainty doesn’t matter (with precise probabilities); only the expected values do. So allocating in proportion to your credence that each project will be best makes the allocation depend on something that doesn’t actually matter, because you can hold a project’s expected value (as a function of funding) constant while adjusting the probability that it’s best.
To be more concrete, suppose all of the projects are statistically independent and each returns either 0 or some high value with some tiny probability, with the value or the probability of positive return scaling with the amount of resources spent on the project, so that the expected values scale logarithmically. Let’s also assume only two projects (or a number that grows sufficiently slowly relative to the inverse probability of any one of them succeeding). Then, conditional on nonzero impact, your impact will, with probability very close to 1, come from whichever single funded project succeeds, since it’s very unlikely that multiple will.
So, I think we’ve satisfied the stated conditions of the theorem, and it recommends allocating in proportion to our credences in each project being best, which, with very low independent probabilities of success across projects, is roughly proportional to the credence that the project succeeds at all. But we could have projects with the same expected value (at each funding level, increasing logarithmically with resources spent) with one 10x more likely to succeed than the rest combined. Then the theorem claims the optimal allocation is to put ~91% (10/11) into the project most likely to succeed, but the expected value-maximizing allocation is to give the same amount to each project.
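To put hypothetical numbers on this (the `log(1 + x)` expected value function, the budget of 2, and the 10:1 success odds are my own illustrative assumptions, not from the theorem): with two independent projects sharing the same logarithmic expected value function, the even split beats the ~91/9 split the theorem would recommend:

```python
import math

def ev(x):
    # common expected value of a project given x units of funding
    # (hypothetical choice of logarithmic returns: log(1 + x))
    return math.log(1 + x)

def total_ev(x, budget=2.0):
    # project A gets x, project B gets the rest of the budget
    return ev(x) + ev(budget - x)

even = total_ev(1.0)              # 50/50 split
skewed = total_ev(2.0 * 10 / 11)  # ~91/9 split, proportional to 10:1 success odds

print(f"even split:  {even:.2f}")    # about 1.39
print(f"~91/9 split: {skewed:.2f}")  # about 1.20
```

The gap only grows as the split becomes more lopsided, since the logarithm penalizes starving the less-likely project.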
I think the even allocation would still be optimal if the common expected value function of resources spent on each project were non-decreasing and (weakly) concave, and if it’s strictly increasing and strictly concave (like the logarithm), then the even allocation is also the unique maximizer.
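A quick brute-force check of the strictly concave case (again with my hypothetical common expected value function `log(1 + x)`, here with three projects and a budget of 3, discretized into tenths): a grid search over all allocations finds the even split as the maximizer.

```python
import math

def total_ev(alloc):
    # total expected value under a common log(1 + x) returns function
    return sum(math.log(1 + x) for x in alloc)

units = 30  # budget of 3.0 in tenths of a unit
allocations = (
    (a / 10, b / 10, (units - a - b) / 10)
    for a in range(units + 1)
    for b in range(units + 1 - a)
)
best = max(allocations, key=total_ev)
print(best)  # (1.0, 1.0, 1.0): the even split
```

This is of course only a sketch on a grid, not a proof; the general claim follows from symmetry plus (strict) concavity of the objective.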