Oops, my colleague checked again and the Future Perfect inclusions (Kelsey and Sigal) are indeed a mistake; OP hasn’t funded Future Perfect. Thanks for the correction. (Though see e.g. this similar critical tweet from OP grantee Matt Reardon.)
Re: Eric Neyman. We’ve funded ARC before and would do so again, depending on room for more funding (RFMF), etc.
I am copying footnote 19 from the post above into this comment for easier reference/linking:
The “defense in depth” concept originated in military strategy (Chierici et al. 2016; Luttwak et al. 2016, ch. 3; Price 2010), and has since been applied to risk reduction in a wide variety of contexts, including nuclear reactors (International Nuclear Safety Advisory Group 1996, 1999, 2017; International Atomic Energy Agency 2005; Modarres & Kim 2010; Knief 2008, ch. 13), chemical plants (see “independent protection layers” and “layers of protection analysis” in Center for Chemical Process Safety 2017), aviation (see “Swiss cheese model” in Shappell & Wiegmann 2000), space vehicles (Dezfuli 2015), cybersecurity and information security (McGuiness 2021; National Security Agency 2002 & 2010; Amoroso 2011; Department of Homeland Security 2016; Riggs 2003; Lohn 2019), software development (including for purposes beyond software security, e.g. software resilience; Adkins et al. 2020, ch. 8), laboratories studying dangerous pathogens (WHO 2020; CDC 2020; Rappert & McLeish 2007; National Academies 2006, which use different terms for “defense in depth”), improvised explosive devices (see “web of prevention” in Revill 2016), homeland security (Echevarria II & Tussing 2003), hospital security (see “layers of protection” in York & MacAlister 2015), port security (McNicholas 2016, ch. 10), physical security in general (Patterson & Fay 2017, ch. 11), control system safety in general (see “layers of protection” in Barnard 2013; Baybutt 2013), mining safety (Bonsu et al. 2016), oil rig safety (see “Swiss cheese model” in Ren et al. 2008), surgical safety (Collins et al. 2014), fire management (Okray & Lubnau II 2003, pp. 20–21), health care delivery (Vincent et al. 1998), and more. Related (and in some cases near-identical) concepts include the “web of prevention” (Rappert & McLeish 2007; Revill 2016), the “Swiss cheese model” (Reason 1990; Reason et al. 2006; Larouzee & Le Coze 2020), “layers of protection” (Center for Chemical Process Safety 2017), “multilayered defense” or “diversity of defense” (Chapple et al. 2018, p. 352), “onion skin” or “lines of defense” (Beaudry 2016, p. 388), and “layered defense” (May et al. 2006, p. 115).

Example partially-overlapping “defense layers” for high-stakes AI development and deployment projects might include:

1. tools for blocking unauthorized access to key IP, e.g. secure hardware enclaves for model weights;
2. tools for blocking unauthorized use of developed/trained IP, akin to the PALs on nuclear weapons;
3. tools and practices for ensuring safe and secure behavior by the humans with access to key IP, e.g. via training, monitoring, better interfaces, etc.;
4. methods for scaling human supervision and feedback during and after training high-stakes ML systems;
5. technical methods for gaining high confidence in certain properties of ML systems, and properties of the inputs to ML systems (e.g. datasets), at all stages of development (a la Ashmore et al. 2019);
6. background checks and similar for people being hired or promoted to certain types of roles;
7. legal mechanisms for retaining developer control of key IP in most circumstances;
8. methods for avoiding or detecting supply chain attacks;
9. procedures for deciding when and how to engage one’s host government to help with security/etc.;
10. procedures for vetting and deciding on institutional partners, investors, etc.;
11. procedures for deciding when to enter into some kinds of cross-lab (and potentially cross-state) collaborations, tools for executing those collaborations, and tools for verifying another party’s compliance with such agreements;
12. risk analysis and decision support tools specific to high-stakes AI system developers;
13. whistleblowing/reporting policies;
14. other features of high-reliability organizations, a la Dietterich (2018) and Shneiderman (2020);
15. procedures for balancing concerns of social preference/political legitimacy and ethical defensibility, especially for deployment of systems with a large and broad effect on society as a whole, e.g. see Rahwan (2018); Savulescu et al. (2021);
16. special tools for spot-checking/double-checking/cross-checking whether all of the above are being used appropriately; and
17. backup plans and automatic fail-safe mechanisms for all of the above.
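As a toy illustration (mine, not from the footnote): the quantitative intuition behind layering defenses like these is that if each layer independently catches some fraction of threats, the residual risk shrinks multiplicatively. The independence assumption is the idealized “Swiss cheese” case; real layers often fail in correlated ways, which is why diversity of defenses matters. A minimal sketch, with made-up failure probabilities:

```python
from functools import reduce

def residual_risk(layer_failure_probs):
    """Probability a threat slips past every layer, assuming each
    layer fails independently (idealized 'Swiss cheese' model;
    correlated failures would make the true risk higher)."""
    return reduce(lambda acc, p: acc * p, layer_failure_probs, 1.0)

# Three hypothetical layers that each miss 10%, 20%, and 30% of threats.
# Individually weak, but combined residual risk is 0.1 * 0.2 * 0.3 = 0.6%.
risk = residual_risk([0.1, 0.2, 0.3])
print(round(risk, 6))  # 0.006
```

The same arithmetic also shows why correlated failures are the key caveat: if one root cause (say, a compromised insider) punches through several layers at once, those layers no longer multiply and the effective number of independent defenses is smaller than it appears.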