Hi Max! Again, I agree the longtermist and garden-variety cases may not actually differ regarding the measure-theoretic features in Vaden’s post, but here are some additional comments.
But it would be a pretty strange objection to say that me giving a probability of 60% is meaningless, or that I’m somehow not able or not allowed to enter such bets.
Although a “probability of 60%” may be less meaningful than we’d like or expect, you are certainly allowed to enter such bets. In fact, the existence of someone willing to take the other side suggests that they disagree with your probability. This highlights the difficulty of converging on objective probabilities for future outcomes that aren’t directly governed by domain-specific science (e.g. the laws of planetary motion). For outcomes closer in time, we might converge reasonably well on an unambiguous measure or an appropriate parametric statistical model.
Regarding the “60% probability” for future outcomes, a useful thought experiment for me was how I might reason about the risk profile of bets made on open-ended future outcomes. I quickly become less convinced I’m estimating meaningful risk the further out I go. Further, we only run the future once, so it’s hard to actually confirm that our probability is meaningful (as we can for repeated coin flips). We could make longtermist bets by transferring money between our far-future offspring, but we can’t tell who comes out on top “in expectation” beyond simple arbitrages.
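To make the repeated-versus-one-shot contrast concrete, here is a minimal sketch (not from the original discussion; the “true” probability is a made-up placeholder) of why repeated coin flips let us check a 60% claim against observed frequencies, while a single run of the future yields only one outcome and cannot confirm the estimate:

```python
# Minimal illustrative sketch: repeated bets versus a one-shot future outcome.
import random

random.seed(0)
TRUE_P = 0.6  # hypothetical "true" chance of the event (assumed for illustration)

# Repeated case: the empirical frequency converges toward TRUE_P,
# so a stated "60%" is testable against the data.
n_flips = 10_000
flips = [random.random() < TRUE_P for _ in range(n_flips)]
print(f"repeated case: empirical frequency = {sum(flips) / n_flips:.3f}")

# One-shot case: a single draw is just 0 or 1; on its own it cannot tell us
# whether a forecaster's 60% was better calibrated than a rival's 40%.
one_shot = random.random() < TRUE_P
print(f"one-shot case: outcome = {int(one_shot)} (uninformative about the 60% claim)")
```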
This defence is that for any instance of probabilistic reasoning about the future we can simply ignore most possible futures
Honest question, being new to EA: is it not problematic to restrict our attention to possible futures, or aspects of futures, which are relevant to a single issue at a time? Shouldn’t we calculate expected utility (EU) over billion-year futures for all current interventions, and set our relative propensity for actions = exp(α · EU) / normalizer?
For example, the downstream effects of donating to anti-malaria charities would be difficult to reason about, but that means we are clueless as to whether their EU would be dwarfed by AI safety on a billion-year timescale; e.g., bringing the entire world out of poverty might limit the political risk that leads to totalitarian government.
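For concreteness, here is a minimal sketch of the softmax rule suggested in the question above, exp(α · EU) / normalizer, applied across candidate interventions. The intervention names and EU values are made-up placeholders; producing credible billion-year EU estimates is exactly the hard part.

```python
# Illustrative sketch of the proposed rule: propensity(a) = exp(alpha * EU(a)) / normalizer.
import math

def action_propensities(expected_utilities, alpha):
    """Return exp(alpha * EU) / normalizer for each action (a softmax over EUs)."""
    # Subtract the max EU before exponentiating for numerical stability.
    max_eu = max(expected_utilities.values())
    weights = {a: math.exp(alpha * (eu - max_eu)) for a, eu in expected_utilities.items()}
    normalizer = sum(weights.values())
    return {a: w / normalizer for a, w in weights.items()}

# Hypothetical billion-year EU estimates for three placeholder options.
eus = {"anti-malaria": 1.0, "AI safety": 1.2, "do nothing": 0.0}
print(action_propensities(eus, alpha=2.0))
```

As α grows, the rule approaches picking only the single highest-EU action; as α shrinks toward zero, it spreads propensity evenly, which is one way to read the disagreement about how much weight such EU estimates should carry.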
Honest question, being new to EA: is it not problematic to restrict our attention to possible futures, or aspects of futures, which are relevant to a single issue at a time? Shouldn’t we calculate expected utility (EU) over billion-year futures for all current interventions, and set our relative propensity for actions = exp(α · EU) / normalizer?
Yes, I agree that it’s problematic. We “should” do the full calculation if we could, but in fact we can’t because of our limited capacity for computation/thinking.
But note that in principle this situation is familiar. E.g. a CEO might try to maximize the long-run profits of her company, or a member of government might try to design a healthcare policy that maximizes wellbeing. In none of these cases are we able to do the “full calculation”, albeit by a less dramatic margin than for longtermism.
And we don’t think that the CEO’s or the politician’s efforts are meaningless or doomed or anything like that. We know that they’ll use heuristics, simplified models, or other computational shortcuts; we might disagree with them about which heuristics and models to use, and if repeatedly queried with “why?” both they and we would come to a place where we’d struggle to justify some judgment call or choice of prior or whatever. But that’s life: a familiar situation and one we can’t get out of.