(Edited to add: I agree that no one has a perfectly coherent model of the world because we’re all flawed humans, but that doesn’t mean we don’t have any coherence or prediction ability)
My quick, off-the-cuff theory of good forecasting is that you’re probably running something like a good Monte Carlo algorithm in your head as you simulate outcomes. That’s great if you’re willing to assign all events positive probability (a good idea when forecasting something like an election). But that assumption begs the question against Don’t Prevent Impossible Harms. And, as I note in the article, getting very high precision can still be computationally expensive.
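To make the computational-expense point concrete, here’s a toy sketch (my own illustration; the function names and the one-in-a-million probability are just placeholders): estimating a small probability by brute-force simulation needs a number of runs that blows up as the probability shrinks.

```python
import random

def estimate_probability(simulate_once, n_samples):
    """Estimate P(event) by simulating the world n_samples times and counting hits."""
    hits = sum(simulate_once() for _ in range(n_samples))
    return hits / n_samples

# Toy stand-in for "simulate one possible world and check whether the harm occurs".
def toy_rare_event(p=1e-6):
    return random.random() < p

print(estimate_probability(toy_rare_event, 10_000))      # almost always 0.0
print(estimate_probability(toy_rare_event, 10_000_000))  # begins to resolve ~1e-6
```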
Sorry, I had just read your comment and not the post previously. I’ve now read the section “My Argument in Ordinary Language” and skimmed a few other portions; I don’t think I would be able to understand the technical details very quickly.
My new question is (sorry if I missed the answer to this somewhere): Why can’t I just say that all possible events have positive probability, and that our task is to figure out which ones are higher and worth paying attention to and which ones are very, very low (and as such not worth worrying about)? Isn’t the idea that we should have nonzero probability on any event occurring a core tenet of Bayesian epistemology? Do you disagree with Bayesian epistemology, or am I missing something (totally possible)?
I guess the worry then is that you’re drawn into fanaticism: in principle, any positive probability event, however small that probability is, can be bad enough to justify taking extremely costly measures now to ameliorate it.
I’d also say that assigning all events positive probability can’t be a part of bayesianism in general if we want to allow for a continuum of possible events (e.g., as many possible events as there are real numbers).
I do think the best way out for the position I’m arguing against is something like: assume all events have positive probability, set an upper bound on the badness of events and on the costliness of ameliorating them (to avoid fanaticism), and then hope you can run simulations that give you a tight margin of error with low failure probability.
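For concreteness, the kind of guarantee I mean is the standard Hoeffding bound for a Monte Carlo estimate $\hat{p}_n$ of a probability $p$ (textbook material, nothing specific to my article):

$$\Pr\big(|\hat{p}_n - p| \ge \varepsilon\big) \le 2e^{-2n\varepsilon^2} \quad\Longrightarrow\quad n \ge \frac{\ln(2/\delta)}{2\varepsilon^2} \ \text{samples suffice for failure probability } \delta,$$

which for, say, $\varepsilon = 10^{-6}$ and $\delta = 0.05$ is already on the order of $10^{12}$ simulation runs.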
Ah yeah, I think I’m probably more sympathetic to (the maybe unfortunately named) fanaticism than you are; see e.g. In Defense of Fanaticism. Honestly, the thing that makes me worry about it the most by far is infinite ethics.
I’d also say that assigning all events positive probability can’t be a part of bayesianism in general if we want to allow for a continuum of possible events (e.g., as many possible events as there are real numbers).
Yeah, I’m confused about how to think about this; I’d be interested to hear from an expert on this topic about what the Bayesian view here is. I did some searching but couldn’t find anything in the literature within a few minutes.
My intuition says that it’s fine as long as the number of possible events isn’t a “bigger infinity” than the real numbers, in the same way that uncountable infinities are larger than countable ones? But I’m not sure.
So I’m actually fine with fanaticism in principle if we allow some events to have probability zero. But if every event in our possibility space has positive probability, then I worry that you’ll just throw ever-more resources at preventing ever-lower probability catastrophes.
On probability zero events and Bayesianism in the case where the sample space is a continuum, Easwaran is a great source (this is long but worth it, sec. 1.3.3 and sec. 2 are the key parts): https://philpapers.org/archive/EASCP.pdf
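The rough reason a probability function can’t assign positive probability to every outcome in a continuum is a standard counting argument: for any collection of disjoint events $A_i$,

$$\{\, i : P(A_i) > 0 \,\} \;=\; \bigcup_{n=1}^{\infty} \{\, i : P(A_i) \ge \tfrac{1}{n} \,\},$$

and each set in that union can contain at most $n$ indices (otherwise the disjoint events’ probabilities would sum past 1). So at most countably many disjoint events can get positive probability, and over an uncountable space like the real numbers almost all individual outcomes must get probability zero.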
So I’m actually fine with fanaticism in principle if we allow some events to have probability zero. But if every event in our possibility space has positive probability, then I worry that you’ll just throw ever-more resources at preventing ever-lower probability catastrophes.
I don’t see why this is an issue. It seems like a good thing to prevent catastrophes as long as doing so is more cost-effective than non-catastrophe-preventing interventions. If the catastrophe is low enough probability, then we should pursue other interventions instead.
On probability zero events and Bayesianism in the case where the sample space is a continuum, Easwaran is a great source (this is long but worth it, sec. 1.3.3 and sec. 2 are the key parts): https://philpapers.org/archive/EASCP.pdf
Thanks for linking. I read through Section 1.3.3 and thought it was interesting.
I thought of an argument that you might be wrong about the disanalogy you claim above between longtermist forecasting and election forecasting. You write in previous comments:
That’s great if you’re willing to assign all events positive probability (a good idea when forecasting something like an election). But that assumption begs the question against Don’t Prevent Impossible Harms.
and
I’d also say that assigning all events positive probability can’t be a part of bayesianism in general if we want to allow for a continuum of possible events (e.g., as many possible events as there are real numbers).
Let’s consider the election case. At a high level of abstraction, either Candidate A or Candidate B will win. We should obviously assign positive probability to both A and B winning. But on a more fine-grained view, there’s an infinite continuum of possible outcomes for the state of the world: e.g., described with enough precision, where Candidate A is standing at 10 PM on election night has infinitely many possibilities. The key is that we’re collapsing an infinite number of possible worlds into a finite number of possibilities, similar to how the probability of a random point on a circle landing on any exact spot is 0, but the chance of it ending up on the right half is 0.5.
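Here’s a toy version of the circle point in code (my own illustration; the specific numbers are arbitrary): any exact angle effectively never comes up, but the coarse-grained “right half” event happens about half the time.

```python
import random

def random_angle():
    return random.uniform(0, 360)  # a uniformly random point on the circle, as an angle in degrees

n = 1_000_000
samples = [random_angle() for _ in range(n)]

exact_hits = sum(theta == 123.456 for theta in samples)           # essentially never happens
right_half = sum(theta < 90 or theta > 270 for theta in samples)  # the coarse-grained "right half" event

print(exact_hits / n)  # ~0.0
print(right_half / n)  # ~0.5
```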
I claim that the same thing is happening with catastrophic risk forecasting (a representative instance of longtermist forecasting). Let’s take AI risk: there are two states the world could be in by 2100, one without AI having caused extinction and one with AI having caused extinction. I claim that, similarly to the election forecasting example, it would be absurd not to assign nonzero probabilities to both states. As in the election example, this is collapsing infinitely many states of the world into just two categories, and this should obviously lead to nonzero credence in both categories.
My guess is that the key error you’re making in your argument is that you’re considering very narrow events e, while longtermists actually care about classes of many events, which in aggregate obviously demand positive probability. In this respect forecasting catastrophic risks is the same as forecasting elections (though obviously there are other differences, such as the methods we use to come up with probabilities)! Let me know if I’m missing something here.
I think the contrast with elections is an important and interesting one. I’ll start by saying that being able to coarse-grain the set of all possible worlds into two possibilities doesn’t mean we should assign both possibilities positive probability. Consider the set of all possible sequences of infinite coin tosses. We can coarse-grain those sequences into two sets: the ones where finitely many coins land heads, and the ones where infinitely many coins land heads. But, assuming we’re actually going to toss infinitely many coins, and assuming each coin is fair, the first set of sequences has probability zero and the second set has probability one.
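The quick argument, for anyone who wants it: a sequence has finitely many heads just in case some toss $N$ is followed only by tails, and each of those events has probability zero, so by the union bound

$$P(\text{finitely many heads}) \;\le\; \sum_{N=1}^{\infty} P(\text{all tails from toss } N \text{ onward}) \;=\; \sum_{N=1}^{\infty} \lim_{k \to \infty} \left(\tfrac{1}{2}\right)^{k} \;=\; 0,$$

and the complementary coarse-grained event gets probability one.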
In the election case, we have a good understanding of the mechanism by which elections are (hopefully) won. In this simple case with a plurality rule, we just want to know which candidate will get the most votes. So we can define probability distributions over the possible number of votes cast, and probability distributions over possible distributions of those votes to different candidates (where vote distributions are likely conditional on overall turnout), and coarse-grain those various vote distributions into the possibility of each candidate winning. This is a simple case, and no doubt real-world election models have many more parameters, but my point is that we understand the relevant possibility space and how it relates to our outcomes of interest fairly well. I don’t think we have anything like this understanding in the AGI case.
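Something like the following toy model is what I have in mind (a sketch only; the particular distributions and numbers are made up, and a real model would, as noted, condition the vote-share distribution on turnout and much else):

```python
import random

def simulate_election():
    # Distribution over the number of votes cast.
    turnout = max(1, int(random.gauss(1_000_000, 50_000)))
    # Distribution over Candidate A's share of those votes (kept independent of
    # turnout here purely for simplicity).
    share_a = min(1.0, max(0.0, random.gauss(0.51, 0.02)))
    votes_a = round(turnout * share_a)
    # Coarse-grain the fine-grained outcome into "does A win?" under a plurality rule.
    return votes_a > turnout - votes_a

n = 100_000
p_a_wins = sum(simulate_election() for _ in range(n)) / n
print(f"P(Candidate A wins) ≈ {p_a_wins:.3f}")
```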
Great, I think we’ve gotten to the crux. I agree we have much worse understanding in the AGI case, but I think we easily have enough understanding to assign positive probabilities, and likely substantial ones. I agree more detailed models are ideal, but in some cases they’re impractical and you have to do the best you can with the evidence you have. Also, this is a matter of degree and not binary, and I think people often take explicit models too literally/seriously and don’t account enough for model uncertainty, e.g. putting too much faith in oversimplified economic models, or underestimating how much explicit climate models might be missing out on tail risks or unknown unknowns.
I’d be extremely curious to get your take on why AGI forecasting is so different from the long-term speculative forecasts in the piece Nuno linked above, of which many turned out to be true.
I don’t have a fully-formed opinion here, but for now I’ll just note that the task that the examined futurists are implicitly given is very different from assigning a probability distribution to a variable based on parameters. Rather, the implicit task is to say some stuff that you think will happen. Then we’re judging whether those things happen. But I’m not sure how to translate the output from the task into action. (E.g., Asimov says X will happen, and so we should do Y.)
Agree that these are different; I think they aren’t different enough to come anywhere close to meaning that longtermism can’t be action-guiding though!
Would love to hear more from you when you’ve had a chance to form more of an opinion :)
Edit: also, it seems like one could mostly refute this objection by just finding times when someone did something with the intention of affecting the future in 10-20 years (a horizon many people give some weight to for AGI timelines), and the action had the intended effect? This seems trivial.
What are your thoughts on forecasting techniques for open-ended/subjective questions that have demonstrably good track records?
See e.g. https://www.cold-takes.com/prediction-track-records-i-know-of/ (or my personal track record)
On a way to defuse the fanaticism problem: I’ve actually written a post showing why a noise floor is the most useful way to solve it.
Here’s the post, called EV Maximization for Humans:
https://forum.effectivealtruism.org/posts/qSnjYwsAFeQv2nGnX/ev-maximization-for-humans
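For readers who want a rough picture before clicking through (this is only my shorthand; see the post for the actual argument): a noise floor means treating outcomes whose probability falls below some threshold as having probability zero before maximizing expected value. A minimal sketch, with a made-up threshold:

```python
NOISE_FLOOR = 1e-9  # hypothetical threshold, purely for illustration

def expected_value(outcomes, noise_floor=NOISE_FLOOR):
    """outcomes: a list of (probability, value) pairs for a single intervention."""
    # Treat probabilities below the noise floor as zero before taking the expectation.
    return sum(p * v for p, v in outcomes if p >= noise_floor)

# An astronomically bad but astronomically unlikely outcome no longer dominates:
fanatical = [(1e-20, -1e30), (1 - 1e-20, 1.0)]
mundane = [(0.5, 0.0), (0.5, 4.0)]
print(expected_value(fanatical))  # ≈ 1.0, rather than ≈ -1e10 without the floor
print(expected_value(mundane))    # 2.0
```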