Lauro—nice post; I especially appreciated your points about default-success versus default-failure mindsets.
Importantly, I think these defaults apply not just to (1) likelihood of being able to develop AGI and (2) likelihood of AGI imposing doom, but also to (3) likelihood that international regulations/pauses/moratoriums could succeed, and (4) likelihood that an anti-AI moral backlash could succeed, and lots of other related issues.
For example, some folks seem to think there’s a very strong default-failure outcome of trying to coordinate formal global regulation of AI, but the same folks (e.g. me!) may think there’s a fairly strong default-success outcome of trying to promote informal global moral stigmatization of AI.
Of course, in each such case, what counts as 'success' or 'failure' may depend heavily on one's goals. For transhumanist Singularity enthusiasts who actually want humans to be replaced by AIs, a high 'default-fail' rate for AI alignment might be seen as a success; for libertarians who want every private citizen to have their own unregulated, unaligned AI, a default-fail for global AI regulation would be seen as a success. So, we should be careful to be clear about what we're counting as 'success' or 'failure' when we talk about default-success or default-failure mindsets.
Hi Geoffrey! Yeah, good point—I agree that the right way to look at this is finer-grained, separating out prospects for success via different routes (gov regulation, informal coordination, technical alignment, etc).