I think one crux here is what to do in the face of uncertainty.
You say:
If you put a less unreasonable (from my perspective) number like 50% that we'll have AGI in 30 years, and 50% we won't, then again I think your vibes and mood are incongruent with that. Like, if I think it's 50-50 whether there will be a full-blown alien invasion in my lifetime, then I would not describe myself as an "alien invasion risk skeptic", right?
But I think sceptics like titotal aren't anywhere near 5% - in fact they deliberately do not have a number. And when they have low credences in the likelihood of rapid, near-term, transformative AI progress, they aren't saying "I've looked at the evidence for AI progress and am confident putting it at less than 1%" or whatever. They're saying something more like "I've looked at the arguments for rapid, transformative AI progress and they seem so unfounded/hype-based to me that I'm not even giving the hypothesis table stakes".
I think this is a much more realistic form of bounded rationality. Sure, in some perfect Bayesian sense you'd want to assign every hypothesis a probability and make sure they all sum to 1, etc., but in practice that's not what people do. I think titotal's experience (though obviously this is my interpretation - get it from the source!) is that they see a bunch of wild claims X, do a spot check on their own field of materials science, and come away so unimpressed that they relegate the "transformative near-term LLM-based AGI" hypothesis to "not a reasonable hypothesis".
To them, I feel it's less like someone asking "don't put the space heater next to the curtains because it might cause a fire" and more like "don't keep the space heater in the house because it might summon the fire demon Asmodeus, who will burn the house down". To titotal and other sceptics, the evidence presented is simply not commensurate with the claims made.
(For reference: while I was previously also sceptical, I have actually become a lot more concerned about transformative AI over the last year based on some of the results, but that is from a much lower baseline, and the risks I worry about are more around politics/concentration of power than loss of control to autonomous systems.)