The issue is that both sides of the debate lack gears-level arguments. The ones you give in this post (like “all the doom flows through the tiniest crack in our defence”) are more like vague intuitions; equally, on the other side, there are vague intuitions like “AGIs will be helping us on a lot of tasks” and “collusion is hard” and “people will get more scared over time” and so on.
I’d say it’s more than a vague intuition. It follows from alignment/control/misuse/coordination not being (close to) solved and ASI being much more powerful than humanity. I think it should be possible to formalise it, even. “AGIs will be helping us on a lot of tasks”, “collusion is hard” and “people will get more scared over time” aren’t anywhere close to overcoming it imo.
It follows from alignment/control/misuse/coordination not being (close to) solved.
“AGIs will be helping us on a lot of tasks”, “collusion is hard” and “people will get more scared over time” aren’t anywhere close to overcoming it imo.
These are what I mean by the vague intuitions.
I think it should be possible to formalise it, even.
Nobody has come anywhere near doing this satisfactorily. The most obvious explanation is that they can’t.
To be fair, I think I’m partly making wrong assumptions about what exactly you’re arguing for here.
On a slightly closer read, you don’t actually argue in this piece that it’s as high as 90% - I assumed that because I think you’ve argued for that previously, and I think that’s what “high” p(doom) normally means.
I’m crying out for convincing gears-level arguments against (even have $1000 bounty on it), please provide some.
I do think it is basically ~90%, but I’m arguing here for doom being the default outcome of AGI; I think “high” can reasonably be interpreted as >50%.