Thanks for your comment!
Yes, I share that desire for more comparability in predictions and more breakdowns of what's informing one's predictions. Though I'd also highlight that the predictions are often not even very clear in what they're about, let alone very comparable or clear in what's informing them. So clarity in what's being predicted might be the issue I'd target first. (Or one could target multiple issues at once.)
And your comments on, and graph of, the 2008 GCR conference results are interesting. Are the units on the y axis percentage points? E.g., is that indicating something like a 2.5% chance superintelligent AI kills 1 or fewer people? I feel like I wouldn't want to extrapolate that from the predictions the GCR researchers made (which didn't include predictions for 1 or fewer deaths); I'd guess they'd put more probability mass on 0.
The probability of an interval is the area under the graph! Currently, it's set to 0% that any of the disaster scenarios kill < 1 person. I agree this is probably incorrect, but I didn't want to make any other assumptions about points they didn't specify. Here's a version that explicitly states that.
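For concreteness, here's a toy sketch of what I mean by that, with made-up numbers rather than the actual elicited values: if the y axis is a probability density over whatever variable the x axis is measured in, the probability of an interval is just the integral of the curve over that interval.

```python
import numpy as np

# Toy example only: these density values are made up, not the elicited ones.
# x is whatever the horizontal axis is measured in; y is probability density
# per unit of x, so the total area under the curve should integrate to 1.
x = np.linspace(0.0, 10.0, 11)
y = np.array([0.02, 0.05, 0.08, 0.12, 0.15, 0.16, 0.15, 0.12, 0.08, 0.05, 0.02])
y = y / np.trapz(y, x)  # normalise so the full area is exactly 1

# The probability that the quantity falls between x = 0 and x = 1 is the
# area under the curve over that sub-interval.
p = np.trapz(y[:2], x[:2])
print(f"P(0 <= x <= 1) ≈ {p:.3f}")
```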
It's not obvious to me how to interpret this without specifying the units on the y axis (percentage points?), and when the x axis is logarithmic and in units of numbers of deaths. E.g., for the probability of superintelligent AI killing between 1 and 10 people, should I multiply ~2.5 (the height of the curve, read off the y axis) by ~10 (the width of the interval along the x axis) and get 25%? But then I'll often be multiplying that height by interval widths of more than 100 and getting probabilities well over 100%?
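To spell out the two readings I'm unsure between (using ~2.5 purely as an example of a plotted height, and taking "area under the graph" at face value), a quick sketch:

```python
import math

height = 0.025  # a plotted height of "2.5", read as 2.5% per unit of... something

# Reading (a): the density is per death, i.e. the x axis is treated as linear.
# P(1 to 10 deaths) = height * (10 - 1) ≈ 22.5%, and an interval like
# 10^5 to 10^6 deaths would give height * 900,000 -- clearly not a probability.
p_per_death = height * (10 - 1)

# Reading (b): the density is per order of magnitude, i.e. per unit of log10(deaths).
# P(1 to 10 deaths) = height * (log10(10) - log10(1)) = 2.5%, and each
# decade-wide interval contributes only a few percent.
p_per_decade = height * (math.log10(10) - math.log10(1))

print(p_per_death, p_per_decade)  # 0.225 vs 0.025
```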
So at the moment I can make sense of which events are seen as more likely than other ones, but not the absolute likelihood they're assigned.
I may be making some basic mistake. Also feel free to point me to a pre-written guide to interpreting Elicit graphs.