Thanks for the long comment; this gives me a much richer picture of how people might be thinking about this. On the first two bullets:
You say you aren't anchoring, but in a world where we defaulted to expressing probability in 1/10^6 units called Ms, I'm just left feeling like you would write "you should be hesitant to assign 999,999M+ probabilities without a good argument. The burden of proof gets stronger and stronger as you move closer to 1, and 1,000,000 is getting to be a big number." So if it's not anchoring, what calculation or intuition is leading you specifically to 99% (or at least, something in that ballpark), and would similarly lead you to roughly 990,000M with the alternate language?
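(To make the unit arithmetic explicit, here's a minimal sketch; the `to_M` helper and the "M" unit are just the hypothetical from the paragraph above, not anything standard.)

```python
# 1 M = a probability of 10^-6, so a probability p is p * 10^6
# when expressed in M units.
def to_M(p: float) -> float:
    return p * 1e6

print(f"{to_M(0.99):,.0f}M")      # 990,000M  (i.e. 99%)
print(f"{to_M(0.999999):,.0f}M")  # 999,999M  (i.e. 99.9999%)
```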
My reply to Max and your first bullet both give examples of cases in the natural world where probabilities of real future events would go way outside the 0.01%–99.99% range. Conjunctions force you to have extreme confidence somewhere; the only question is where. If I try to steelman your claim, I think I end up with the idea that we should locate our extreme confidence in the factors inside the product (because their causes are correlated), rather than in the product itself; does that sound fair?
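(A toy illustration of the conjunction point, with made-up numbers: hold the overall probability of an n-way conjunction of independent events fixed, and the individual factors are forced toward 1 as n grows.)

```python
# For n independent sub-events each with probability p, the conjunction
# has probability p**n. Fixing the overall probability and solving for
# p shows how much confidence each factor must carry.
overall = 0.5  # a deliberately unremarkable overall probability
for n in (10, 100, 1000):
    per_factor = overall ** (1 / n)  # p such that p**n == overall
    print(f"n = {n:4d}: each factor must be {per_factor:.6f}")
# n =   10: each factor must be 0.933033
# n =  100: each factor must be 0.993092
# n = 1000: each factor must be 0.999307
```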
The rest I see as an attempt to justify the extreme confidences inside the product, and I'll have to think about it more. The following are gut responses:
"I'm not sure which step of this you get off the boat for."
I'm much more baseline cynical than you seem to be about people's willingness and ability to actually try, and try consistently, over a huge time period. To give some idea, I'd probably have assigned <50% probability to humanity surviving to the year 2150, and <10% for the year 3000, before I came across EA. Whether that's correct or not, I don't think it's wildly unusual among people who take climate change seriously*, and yet we almost certainly aren't doing enough to combat that as a society. This gives me little hope for dealing with the <10% threats that will surely appear over the centuries, and as a result I found, and continue to find, the seemingly-baseline optimism of longtermist EA very jarring.
(Again, the above is a gut response as opposed to a reasoned claim.)
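(For what it's worth, the two gut numbers above hang together arithmetically. A quick consistency check, assuming, purely for illustration, a constant per-century survival rate and a baseline of roughly the year 2020:)

```python
# Survival to 2150 (~1.3 centuries away) with probability 0.5 pins
# down a per-century survival rate s via s**1.3 == 0.5; extrapolating
# the same rate out to the year 3000 (~9.8 centuries away):
s = 0.5 ** (1 / 1.3)   # ~0.587 survival per century
p_3000 = s ** 9.8      # ~0.0054 -- comfortably under the <10% figure
print(f"per-century rate {s:.3f}, P(survive to 3000) = {p_3000:.4f}")
```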
"Applying the rule of thumb for estimating lifetimes to 'the human species' rather than 'intelligent life' seems like it's doing a huge amount of work."
Yeah, Owen made a similar point, and actually I was using civilisation rather than 'the human species', which is 20x shorter still. I honestly hadn't thought about intelligent life as a possible class before, and that probably is the thing from this conversation that has the most chance of changing how I think about this.
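(To show how much the reference class matters, here's a sketch of a Gott-style "delta t" rule of thumb, which I'm assuming is the rule being referred to; the round-number ages are my own illustrative inputs, not claims from the thread.)

```python
# Gott's "delta t" argument: observed at a uniformly random point in
# its lifetime, a phenomenon's remaining duration lies between 1/39
# and 39 times its age so far, with 95% confidence.
def gott_95(age_years: float) -> tuple[float, float]:
    return age_years / 39, age_years * 39

# Round-number ages, purely for illustration:
for name, age in [("civilisation", 10_000), ("human species", 200_000)]:
    lo, hi = gott_95(age)
    print(f"{name}: {lo:,.0f} to {hi:,.0f} more years (95% CI)")
# civilisation: 256 to 390,000 more years (95% CI)
# human species: 5,128 to 7,800,000 more years (95% CI)
```

The 20x gap between the two reference classes carries straight through to the interval, which is why the choice of class does so much work.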
*"The survey from the Yale Program on Climate Change Communication found that 39 percent think the odds of global warming ending the human race are at least 50 percent."
The "any decent shot" is doing a lot of work in that first sentence, given how hard the field is to get into. And even then you only say "probably stop".
There's a motte/bailey thing going on here, where the motte is something like "AI safety researchers probably do a lot more good than doctors" and the bailey is "all doctors who come into contact with EA should be told to stop what they are doing and switch to becoming (e.g.) AI safety researchers, because that's how bad being a doctor is".
I don't think we are making the world a better place by doing the second; where possible we should stick to "probably" and communicate the first, nuance and all, as you did here but, as Khorton notes, people often don't in person.