Matthijs Maas
Senior Research Fellow, Institute for Law & AI
Research Affiliate, Leverhulme Centre for the Future of Intelligence, University of Cambridge.
https://www.matthijsmaas.com/ | https://linktr.ee/matthijsmaas
(apologies for very delayed reply)
Broadly, I’d see this as:
‘anticipatory’ if it is directly tied to a specific policy proposal or project we want to implement (‘we need to persuade everyone of the risk, so they understand the need to implement this specific governance solution’),
‘environment-shaping’ (aimed at shaping key actors’ norms and/or perceptions), if we do not have a strong sense of what policy we want to see adopted, but would like to inform these actors so that, once convinced, they come up with the right choices themselves.
Thanks for this analysis; I found it a very interesting report! As we’ve discussed, there are a number of convergent lines of analysis which Di Cooke, Kayla Matteucci and I also arrived at in our research paper ‘Military Artificial Intelligence as Contributor to Global Catastrophic Risk’ on the EA Forum ( link ; SSRN).
By comparison, though, we focused more on the operational and logistical limits to producing and using LAWS swarms en masse, and we sliced the nuclear risk escalation scenarios slightly differently. We also put less focus on the question of ‘given this risk portfolio, what governance interventions are more/less useful’.
This is part of ongoing work, including a larger project and article that also examines the military developers/operators angle on AGI alignment/misuse risks, and the ‘arsenal overhang (extant military [& nuclear] infrastructures) as a contributor to misalignment risk’ arguments (for the latter, see also some of Michael Aird’s discussion here), though that had to be cut from this chapter for reasons of length and focus.
strong +1 to everything Markus suggests here.
Other journals (depending on the field) could include Journal of Strategic Studies, Contemporary Security Policy, Yale Journal of Law & Technology, Minds & Machines, AI & Ethics, ‘Law, Innovation and Technology’, Science and Engineering Ethics, Foresight, …
As Markus mentions, there are also sometimes good disciplinary journals that run special issue collections on technology; those can be opportunities to get a piece into high-profile journals even if they are usually more averse to tech-focused pieces (e.g. I got a piece into the Melbourne Journal of International Law), though it really depends on what audiences you’re trying to reach / position your work for.
Thanks Nuño! I don’t think I’ve got well-thought-out views on the relative importance or ranking of these work streams; I’m mostly focused on understanding scenarios in which my own work might be more or less impactful. (I should also note that if some lines of research mentioned here seem much more impactful, that may be more a result of me being more familiar with them, and being able to give a more detailed account of what the research is trying to get at / what threat models and policy goals it is connected to.)
On your second question: as with other academic institutes, I believe it’s actually both doable and common for donors or funders to support some of CSER’s themes or lines of work but not others. Some institutional funders (e.g. for large academic grants) will often focus on particular themes or risks (rather than, say, ‘x-risk’ as a general class), and therefore want to ensure their funding goes to just that work. The same has been the case for individual donations supporting certain projects we’ve done, I think.
[ED: -- see link to CSER donation form. Admittedly, this web form doesn’t clearly allow you to specify different lines of work to support, but in practice this could be arranged in a bespoke way—by sending an email to director@cser.cam.ac.uk indicating what area of work one would want to support.]
The Legal Priorities Project’s research agenda also includes consideration of s-risks, alongside x-risks and other types of trajectory changes, though I do agree this remains somewhat under-integrated with other parts of the long-termist AI governance landscape (in part, I speculate, because the perspective might face [even] more inferential distance from the concerns of AI policymakers than x-risk-focused work).
NC3 early warning systems are susceptible to error signals, and the chain of command hasn’t always been very secure (and may not be today), so it wouldn’t necessarily be that hard for a relatively unsophisticated AGI to spoof them and trigger a nuclear war:* certainly easier than many other avenues that would involve cracking scientific problems.
(*which is a different thing from hacking to the level of “controlling” the arsenal and being able to retarget it at will; that would probably require a more advanced capability, at which point the risk from the nuclear avenue might be redundant compared to risks from other, direct avenues).
Incidentally, at CSER I’ve been working with co-authors on a draft chapter that explores “military AI as cause or compounder of global catastrophic risk”, and one of the avenues also involves discussion of what we call “weapons/arsenal overhang”, so this is an interesting topic that I’d love to discuss more.
Some of these (hazard, vulnerability, exposure) are discussed in the context of x-risks in this typology: https://www.sciencedirect.com/science/article/abs/pii/S0016328717301623 [open-access at https://www.researchgate.net/publication/324688255_Governing_Boring_Apocalypses_A_New_Typology_of_Existential_Vulnerabilities_and_Exposures_for_Existential_Risk_Research ]
To some extent, I’d prefer not to anchor people too much yet, before finishing the entire sequence. I’ll aim to circle back later and reflect more deeply on my own commitments. In fact, one reason I’m doing this project is that I notice I have rather large uncertainties over these different theories myself, and want to think through their assumptions and tradeoffs.
Still, while I will go into more detail on this later, I think it’s fair to provide some disclaimers about my own preferences, for those who wish to know them before going in:
[preferences below break]
TL;DR: my current (weakly held) perspective is something like: ‘(a) by default, pursue a portfolio approach consisting of interventions from the Exploratory, Prosaic Engineering, Path-setting, Adaptation-enabling, Network-building, and Environment-shaping perspectives; (b) under extremely short timelines and reasonably good alignment chances, switch to Anticipatory and Pivotal Engineering; (c) under extremely low alignment success probability, switch to Containing.’
This seems grounded in a set of predispositions / biases / heuristics that are something like:
Given that I have quite a lot of uncertainty about key (technical and governance) parameters, I’m hesitant to commit to any one perspective and prefer portfolio approaches. That means I lean towards strategic perspectives that are more information-providing (Exploratory), more robustly compatible with and supportive of many others (Network-building, Environment-shaping), and/or more option-preserving and flexible (Adaptation-enabling); conversely, for these reasons I may have less affinity for perspectives that potentially recommend far-reaching, hard-to-reverse actions under limited-information conditions (Pivotal Engineering, Containing, Anticipatory);
My academic and research background (governance; international law) probably gives me a bias towards the more explicitly ‘regulatory’ perspectives (Anticipatory, Path-setting, Adaptation-enabling), especially in their multilateral versions (Coalitional), and a bias against perspectives that are focused more exclusively on the technical side alone (e.g. both Engineering perspectives), pursue more unilateral actions (Pivotal Engineering, Partisan), or seek to completely break or go beyond existing systems (System-changing);
There are some perspectives (Adaptation-enabling, Containing) that have remained relatively underexplored within our community. While I personally am not yet convinced that there’s enough ground to adopt these as major pillars for direct action, from an Exploratory meta-perspective I am eager to see these options studied in more detail.
I am aware that under very short timelines, many of these perspectives fall away or begin looking less actionable.
[ED: I probably ended up being more explicit here than I intended to; I’d be happy to discuss these predispositions, but would also prefer to keep discussion of specific approaches concentrated in the perspective-specific posts (coming soon).]