It seems to me like quantum randomness can be a source of legitimate divergence of outcomes. Let's call this "bifurcation". I could imagine some utility functions for which increasing the bifurcation of outcomes is beneficial. I have a harder time imagining situations where it's negative.
I’d expect that interventions that cause more quantum bifurcation generally have other costs. Like, if I add some randomness to a decision, the decision quality is likely to decrease a bit on average.
So there’s a question of the trade-offs of a decrease in the mean outcome, vs. an increase in the amount of bifurcation.
There's a separate question of whether EAs should actually desire bifurcation or not. I'm really unsure here, and would like to see more thinking on this aspect of it.
Separately, I'd note that I'm not sure how much this matters if there's already a ton of decisive quantum randomness happening. Even if there's a lot of it, we might want even more.
One quick thought: often, when things are very grim, you're pretty okay taking chances.
Imagine we need 500 units of AI progress in order to save the world, but in expectation we'll only get 100. Increasing our expected amount to 200 doesn't help us; all that matters is whether we can get over 500. In this case, we might want a lot of bifurcation. We'd much prefer a 1% chance of 501 units to a 100% chance of 409 units, for example.
In this case, lots of randomness/bifurcation will increase total expected value (which tracks our chances of getting over 500 units much more so than it tracks the expected units of progress).
I imagine this mainly works with discontinuities, like the function described above (Utility = 0 for 0 to 499 units, and Utility = 1 for 500+ units).
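The threshold argument above can be sketched with a quick simulation. This is a minimal illustration, not a model of anything real: the Gaussian distributions, their means, and their spreads are all arbitrary assumptions I've picked so that both scenarios have the same expected units (100), while only the high-variance one ever crosses the 500-unit threshold.

```python
import random

THRESHOLD = 500  # illustrative: units of AI progress needed to "save the world"

def utility(units):
    # Step utility from the example: 0 below the threshold, 1 at or above it.
    return 1 if units >= THRESHOLD else 0

# Two hypothetical progress distributions with the same mean (100 units)
# but different amounts of spread ("bifurcation").
def low_variance():
    return random.gauss(100, 10)

def high_variance():
    return random.gauss(100, 250)

def expected_utility(draw, n=100_000):
    # Monte Carlo estimate of E[utility] under the given distribution.
    return sum(utility(draw()) for _ in range(n)) / n

random.seed(0)
print(expected_utility(low_variance))   # essentially 0: never crosses 500
print(expected_utility(high_variance))  # noticeably positive: sometimes crosses 500
```

Both distributions have the same expected units of progress, but only the high-variance one has meaningful expected utility, which is the sense in which adding bifurcation can raise expected value under a discontinuous utility function.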