I think one crux here is around what to do in the face of uncertainty.
You say:
If you put a less unreasonable (from my perspective) number like 50% that we’ll have AGI in 30 years, and 50% we won’t, then again I think your vibes and mood are incongruent with that. Like, if I think it’s 50-50 whether there will be a full-blown alien invasion in my lifetime, then I would not describe myself as an “alien invasion risk skeptic”, right?
But I think sceptics like titotal aren't anywhere near 5% - in fact, they deliberately do not have a number. And when they have low credences in the likelihood of rapid, near-term, transformative AI progress, they aren't saying "I've looked at the evidence for AI progress and am confident putting it at less than 1%" or whatever; they're saying something more like "I've looked at the arguments for rapid, transformative AI progress and they seem so unfounded/hype-based to me that I'm not even giving the hypothesis table stakes".
I think this is a much more realistic form of bounded rationality. Sure, in some perfect Bayesian sense you'd want to assign every hypothesis a probability and make sure they all sum to 1, etc. But in practice that's not what people do. I think titotal's experience (though obviously this is my interpretation - get it from the source!) is that they see a bunch of wild claims, do a spot check in their own field of materials science, and come away so unimpressed that they relegate the "transformative near-term LLM-based AGI" hypothesis to 'not a reasonable hypothesis'.
To them, I feel it's less like someone asking "don't put the space heater next to the curtains because it might cause a fire" and more like "don't keep the space heater in the house because it might summon the fire demon Asmodeus, who will burn the house down". To titotal and other sceptics, the evidence presented is simply not commensurate with the claims being made.
(For reference, while I was previously also sceptical, I have actually become a lot more concerned about transformative AI over the last year based on some of the results, but that's from a much lower baseline, and my concerns are more about politics/concentration of power than loss of control to autonomous systems.)