I think Luu’s article is good, and worth reading for people thinking about these issues. A point raised in the appendix of the linked post (not here):
> More generally, the whole methodology is backwards — if you have deep knowledge of a topic, then it can be valuable to put a number down to convey the certainty of your knowledge to other people, and if you don’t have deep knowledge but are trying to understand an area, then it can be valuable to state your uncertainties so that you know when you’re just guessing. But here, we have a fairly confidently stated estimate (nostalgebraist notes that Karnofsky says “Bio Anchors estimates a >10% chance of transformative AI by 2036, a ~50% chance by 2055, and an ~80% chance by 2100.”) that’s based off of a model that’s nonsense that relies on a variable that’s picked out of thin air. Naming a high probability after the fact and then naming a lower number and saying that’s conservative when it’s based on this kind of modeling is just window dressing.
I think the article Luu is discussing didn’t have a very credible approach to partitioning uncertainty over the possibilities in question, which might be what Luu is talking about here.
However, if Luu really means it when he says that the point of writing down probabilities when you don’t have deep topic knowledge is “so that you know when you’re just guessing”, then I disagree. My view is (I think) somewhat like E. T. Jaynes’s: maximising entropy really does help you partition your uncertainty. Furthermore, if the outcome of entropy maximisation is not credible, it may be because you failed to include some things you already knew.
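To make the Jaynes point concrete, here is a minimal Python sketch (my own illustration, not anything from Luu’s post or the Bio Anchors report) of entropy maximisation with a constraint: Jaynes’s dice example, where the only thing you “already knew” is that the long-run mean of a six-sided die is 4.5. The variable names and the SciPy-based setup are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

faces = np.arange(1, 7)   # die outcomes 1..6
known_mean = 4.5          # the one piece of assumed prior knowledge

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)   # avoid log(0)
    return np.sum(p * np.log(p))  # negative Shannon entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},          # probabilities sum to 1
    {"type": "eq", "fun": lambda p: p @ faces - known_mean},  # mean matches what we know
]
result = minimize(neg_entropy, x0=np.full(6, 1 / 6), method="SLSQP",
                  bounds=[(0.0, 1.0)] * 6, constraints=constraints)
print(result.x.round(3))  # roughly [0.054, 0.079, 0.114, 0.165, 0.240, 0.347]
```

Drop the mean constraint and the same optimisation returns the uniform distribution, which is exactly the “chance” baseline in the next paragraph; the distribution only skews once you feed in what you already know.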
For example, Tetlock compares political forecasts to “chance”, but “chance” is really just another name for the “maximise entropy unconditionally” strategy. Another way we could state his finding is that, over 10+ year timescales, this strategy is hard to beat.
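To spell out that equivalence, here is a small numerical check (again my own sketch, not Tetlock’s methodology): among distributions over n outcomes, none has higher entropy than the uniform “chance” distribution, so picking “chance” is the same as maximising entropy with no constraints at all.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                   # treat 0 * log 0 as 0
    return -np.sum(p * np.log(p))

n = 5
uniform = np.full(n, 1 / n)
rng = np.random.default_rng(0)
for _ in range(10_000):
    candidate = rng.dirichlet(np.ones(n))  # a random distribution over n outcomes
    assert entropy(candidate) <= entropy(uniform) + 1e-9
print(f"entropy of uniform = log({n}) = {np.log(n):.3f}; no sampled distribution beat it")
```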
I’ve seen examples where maximising entropy can lead to high probabilities of crazy-seeming things. However, I wonder how often this outcome results from failing to account for some prior knowledge that we do have but haven’t necessarily articulated yet.