Does the 1-3% x-risk from bio include bio catastrophes mediated by AI (via misuse and/or misalignment)? Does it take ASI timelines into account?
Also, just comparing % x-risk seems to miss the value of shaping AI upside / better futures, s-risks + acausal stuff, etc. (also, are you counting AI-enabled coups / concentration of power?). And relatedly, there’s the general heuristic of working on the thing that will be the dominant determinant of the future once developed (and which might be developed soon).
Does the 1-3% x-risk from bio include bio catastrophes mediated by AI (via misuse and/or misalignment)? Does it take ASI timelines into account?
I’m largely deferring to ASB on these numbers, so he can potentially speak in more detail, but my guess is that this includes AI-mediated misuse and accidents (people using LLMs or bio design tools to invent nastier bioweapons and then either deliberately or accidentally releasing them), but excludes misaligned AIs using bioweapons as a tactic in an AI takeover attempt. Since the biodefense work could also help with the latter, the importance ratio here probably stacks the deck somewhat in favor of AI (though I don’t think it’s a giant skew, because bioweapons are just one path to AI takeover).
ASB has pretty short ASI timelines that are broadly similar to mine, and these numbers take that into account.
Also, just comparing % x-risk seems to miss the value of shaping AI upside / better futures, s-risks + acausal stuff, etc. (also, are you counting AI-enabled coups / concentration of power?). And relatedly, there’s the general heuristic of working on the thing that will be the dominant determinant of the future once developed (and which might be developed soon).
If you feel moved by these things and are a good fit to work on them, that’s a much stronger reason to work on AI over bio than most people have. But the vast bulk of generalist EAs working on AI are working on AI takeover and more mundane misuse, which feels like a pretty apples-to-apples comparison to bio.