Hi Max_Daniel! I’m sympathetic to both your and Vaden’s arguments, so let me try to bridge the gap between climate change, your Christmas party, and longtermism.
Climate change is a problem now, and we have past data to support projecting already-observed effects into the future. So we can make statements of the sort: “if current trends continued with no notable intervention, the Earth would be uninhabitable in x years.” Such a statement relies on some assumptions about how future data will resemble past data, but we can be reasonably clear about those assumptions and debate them.
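To make the “project current data forward” idea concrete, here’s a minimal sketch with made-up warming numbers (the series, the noise level, and the target year are all illustrative assumptions, not real climate data):

```python
import random

# Made-up warming anomalies (deg C) with a ~0.02 C/yr trend plus noise --
# purely illustrative numbers, not real climate data.
random.seed(0)
years = list(range(1980, 2021))
anomaly = [0.02 * (y - 1980) + random.gauss(0, 0.05) for y in years]

# Ordinary least-squares trend line, fit by hand.
n = len(years)
mean_y = sum(years) / n
mean_a = sum(anomaly) / n
slope = sum((y - mean_y) * (a - mean_a) for y, a in zip(years, anomaly)) / \
        sum((y - mean_y) ** 2 for y in years)
intercept = mean_a - slope * mean_y

# "If current trends continued with no notable intervention..." -- the projection
# assumes the fitted trend stays valid, which is exactly the kind of assumption
# we can state explicitly and debate.
projected_2100 = slope * 2100 + intercept
print(f"Fitted trend: {slope:.3f} C/yr; naive 2100 projection: {projected_2100:.2f} C above baseline")
```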
Future knowledge will undoubtedly help things and reframe certain problems, but a key point is that we know where to start gathering data on some of the aspects you raise (“how will people adapt?”, “how can we develop renewable energy or batteries?”, etc.), because climate change is already a well-defined problem. We have current knowledge that helps us get off the ground.
I agree the measure-theoretic arguments may prove too much, but the number of people at your Christmas party is an unambiguously posed question, and you have data on how many people you invited, how flaky your friends are, etc.
In both cases, you may use probabilistic predictions, based on a set of assumptions, to compel others to act on climate change or to compel yourself to invite more people.
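For the party case, a minimal sketch of what such a prediction looks like, assuming hypothetical numbers for invitations and flakiness:

```python
import random

# Hypothetical party numbers: 20 invitations, each guest shows up with
# probability 0.7 (estimated from past flakiness). Both figures are made up.
n_invited = 20
p_attend = 0.7

# Point estimate under a simple binomial model.
expected_attendance = n_invited * p_attend  # 14 guests on average

# Monte Carlo check of the spread around that estimate.
random.seed(1)
simulated = sorted(
    sum(random.random() < p_attend for _ in range(n_invited))
    for _ in range(10_000)
)
print(f"Expected attendance: {expected_attendance:.0f}")
print(f"About 90% of simulated parties fall between {simulated[500]} and {simulated[9499]} guests")
```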
The key question is whether longtermism requires the kinds of predictions that aren’t feasible.
At the risk of oversimplifying by using AI safety as a representative longtermist argument, the key difference is that we haven’t created or observed human-level AI, or even systems that can adaptively set their own goals.
There are meaningful arguments we can use to compel others to discuss issues of safety (in algorithm development, government regulation, etc.). After all, it will be a human process to develop and deploy these AI systems, and we can set guardrails through focused discussion today.
Vaden’s point seems to be that arguments relying on expected values or probabilities are of significantly less value in this case. We are not operating on a well-defined problem with already-available or easily collectable data, because we haven’t even created the AI.
This seems to be the key point about “predicting future knowledge” being fundamentally infeasible (just as people in 1900 couldn’t meaningfully reason about the internet, let alone make expected utility calculations about it). Again, we’re not as ignorant as people in 1900, and we may have a sense that this problem is important, but can we actually make concrete progress on killer robots today?
Everyone on this forum may have their own assumptions about future AI, or about climate change for that matter. We may never be able to align our priors and sufficiently agree on the future, but for the purposes of planning and allocating resources, the discussion around climate change seems significantly more grounded.
Hi Max! Again, I agree the longtermist and garden-variety cases may not actually differ regarding the measure-theoretic features in Vaden’s post, but some additional comments here.
Although a “probability of 60%” may be less meaningful than we’d like or expect, you are certainly allowed to enter such bets; in fact, finding someone willing to take the other side suggests that they disagree. This highlights the difficulty of converging on objective probabilities for future outcomes that aren’t directly governed by domain-specific science (e.g., the laws of planetary motion). For outcomes closer in time, we might converge reasonably well on an unambiguous measure or an appropriate parametric statistical model.
Regarding the “60% probability” for future outcomes, a useful thought experiment for me was how I might reason about the risk profile of bets made on open-ended future outcomes. I quickly become less convinced I’m estimating meaningful risk the further out I go. Further, we only run the future once, so it’s hard to actually confirm our probability is meaningful (as we could for repeated coin flips). We could make longtermist bets that transfer money between our far-future offspring, but we can’t tell who would come out on top “in expectation” beyond simple arbitrages.
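A toy illustration of the “we only run the future once” point, assuming a hypothetical forecaster whose 60% happens to match the true frequency for the repeated case:

```python
import random

random.seed(0)

# Repeated events: a forecaster says "60%" for each of many coin flips whose
# true bias is 0.6. With many trials the forecast can be checked against the
# observed frequency.
forecast_p = 0.6
true_p = 0.6  # assumed equal here just to show a well-calibrated case
n_trials = 10_000
hits = sum(random.random() < true_p for _ in range(n_trials))
print(f"Forecast {forecast_p:.0%}; observed frequency {hits / n_trials:.1%} over {n_trials} flips")

# One-shot event: the same "60%" attached to a single far-future outcome.
# We observe exactly one realization, so whichever way it lands is consistent
# with almost any stated probability -- there is no frequency to score against.
single_outcome = random.random() < true_p
print(f"The single far-future outcome occurred: {single_outcome} "
      "(one data point can neither confirm nor refute the 60% figure)")
```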
Honest question, being new to EA… is it not problematic to restrict our attention to possible futures, or aspects of futures, that are relevant to a single issue at a time? Shouldn’t we calculate an expected utility (EU) over billion-year futures for every current intervention, and set our relative propensity for each action to exp(α · EU) / normalizer (i.e., a softmax over expected utilities)?
For example, the downstream effects of donating to Anti-Malaria would be difficult to reason about, but we are clueless as to whether its EU would be dwarfed by AI safety’s on the billion-year timescale; e.g., bringing the entire world out of poverty might limit the political risks that lead to totalitarian governments.
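For concreteness, here’s a minimal sketch of the allocation rule I have in mind, with made-up EU numbers and intervention labels (exp(α · EU) / normalizer is just a softmax):

```python
import math

# Made-up expected utilities over billion-year futures for three hypothetical
# interventions -- the point is the allocation rule, not these numbers.
expected_utility = {
    "anti_malaria": 1.0,
    "ai_safety": 3.0,
    "climate": 2.0,
}
alpha = 1.0  # higher alpha concentrates resources on the highest-EU option

# Relative propensity for each action: exp(alpha * EU) / normalizer, i.e. a softmax.
weights = {k: math.exp(alpha * eu) for k, eu in expected_utility.items()}
normalizer = sum(weights.values())
propensities = {k: w / normalizer for k, w in weights.items()}

for action, share in propensities.items():
    print(f"{action}: {share:.2f}")
```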