Was there a $1bn commitment attributed to Musk? The OpenAI Wikipedia article says: “The organization was founded in San Francisco in 2015 by Sam Altman, Reid Hoffman, Jessica Livingston, Elon Musk, Ilya Sutskever, Peter Thiel and others,[8][1][9] who collectively pledged US$1 billion.”
Great! Looking forward to seeing it!
I suspect that it wouldn’t be that hard to train models at datacenters outside of CA (my guess is this is already done to a decent extent today: 1/12 of Google’s US datacenters are in CA, according to Wikipedia), and that models are therefore a pretty elastic regulatory target.
Data as a regulatory target is interesting, in particular if it transfers ownership or power over the data to data subjects in the relevant jurisdiction. That might e.g. make it possible for CA citizens to lodge complaints about potentially risky models being trained on data they’ve produced. I think the whole domain of data as a potential lever for AI governance is worthy of more attention. Would be keen to see someone delve into it.
I like the thought that CA regulating AI might be seen as a particularly credible signal that AI regulation makes sense, and that it might therefore be more likely to produce a de jure effect. I don’t know how seriously to take this mechanism, though. E.g. to what extent is it overshadowed by CA being heavily Democratic? The most promising way to figure this out in more detail seems to me to be talking to other state legislators and looking at the extent to which previous CA AI-relevant regulation or policy narratives have seen any diffusion. Data privacy and facial recognition stand out as the most promising to look into, but maybe there’s also stuff wrt autonomous vehicles.
Thanks!
That sounds like really interesting work. Would love to learn more about it.
“but also because a disproportionate amount of cutting-edge AI work (Google, Meta, OpenAI, etc) is happening in California.” Do you have a take on the mechanism by which this leads to CA regulation being more important? I ask because I expect most regulation in the next few years to focus on what AI systems can be used in what jurisdictions, rather than what kinds of systems can be produced. Is the idea that you could start putting in place regulation that applies to systems being produced in CA? Or that CA regulation is particularly likely to affect the norms of frontier AI companies because they’re more likely to be aware of the regulation?
Supplement to “The Brussels Effect and AI: How EU AI regulation will impact the global AI market”
We’ve already started to do more of this. Since May, we’ve responded to 3 RFIs and similar requests (you can find them here: https://www.governance.ai/research): the NIST AI Risk Management Framework; the US National AI Research Resource interim report; and the UK Compute Review. We’re likely to respond to the AI regulation policy paper, though we’ve already provided input to this process via Jonas Schuett and me being on loan to the Brexit Opportunities Unit to think about these topics for a few months this spring.
I think we’ll struggle to build expertise in all of these areas, but we’re likely to add more of it over time and build networks that allow us to provide input in these other areas should we find doing so promising.
“I’d suggest being discerning with this list”
Definitely agree with this!
How technical safety standards could promote TAI safety
Sounds right!
One thing you can do is collect some demographic variables on non-respondents and see whether there is self-selection bias on those. You could then try to see whether the variables that show self-selection correlate with certain answers. Baobao Zhang and Noemi Dreksler did some of this work for the 2019 survey (found in D1/page 32 here: https://arxiv.org/pdf/2206.04132.pdf ).
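In case it’s useful, here’s a minimal sketch of what such a check could look like. The file names, column names, and choice of tests are all just illustrative assumptions on my part, not how the actual analysis was done:

```python
import pandas as pd
from scipy import stats

# Hypothetical data: demographics for the full sample frame, with a flag for
# whether each person responded, plus answers for respondents only.
frame = pd.read_csv("sample_frame.csv")   # columns: id, region, responded
answers = pd.read_csv("responses.csv")    # columns: id, hlmi_year_estimate

# 1. Check for self-selection on a demographic variable (e.g. region):
#    compare its distribution among respondents vs. non-respondents.
contingency = pd.crosstab(frame["region"], frame["responded"])
chi2, p_selection, _, _ = stats.chi2_contingency(contingency)
print(f"Self-selection on region: chi2={chi2:.2f}, p={p_selection:.3f}")

# 2. If a variable shows self-selection, check whether it also correlates
#    with the answers of interest among respondents.
merged = frame.merge(answers, on="id")
groups = [g["hlmi_year_estimate"].dropna() for _, g in merged.groupby("region")]
f_stat, p_answers = stats.f_oneway(*groups)
print(f"Region vs. HLMI estimate: F={f_stat:.2f}, p={p_answers:.3f}")
```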
Really excited to see this!
I noticed the survey featured the MIRI logo fairly prominently. Is there a way to tell whether that caused some self-selection bias?
In the post, you say “Zhang et al ran a followup survey in 2019 (published in 2022)1 however they reworded or altered many questions, including the definitions of HLMI, so much of their data is not directly comparable to that of the 2016 or 2022 surveys, especially in light of large potential for framing effects observed.” Just to make sure you haven’t missed this: we had the 2016 respondents who also responded to the 2019 survey receive the exact same question they were asked in 2016, including re HLMI and milestones. (I was part of the Zhang et al team)
Hi Lexley, Good question. Kirsten’s suggestions are all great. To that, I’d add:
Try to work as a research assistant to someone who you think is doing interesting work. More so than other roles, RA positions are often not advertised and are set up on a more ad hoc basis. Perhaps the best route in is to read someone’s work and then reach out to them directly.
Another thing you could do is take a stab at some important-seeming question independently. You could e.g. pick a research question hinted at in a paper/piece (some have a section specifically with suggestions for further work), mentioned in a research agenda (e.g. Dafoe 2018), or included in lists of research ideas (GovAI collated one here, and Michael Aird, I think, sporadically updates this collection of lists of EA-relevant research questions).
My impression is that you can join the AGI Safety Fundamentals as an undergrad.
You could also look into the various “ERIs”: SERI, CHERI, CERI, and so on.
As for GovAI, we have in the past engaged undergrads as research assistants and I could imagine us taking on particularly promising undergrads for the GovAI Fellowship. However, overall, I expect our comparative advantage will be working with folks who either have significant context on AI governance or who have relevant experience from some other domain. It may also lie in producing writing that can help people navigate the field.
Announcing the GovAI Policy Team
Thanks Jeffrey! I hope we’re a community where it doesn’t matter so much whether you think we suck. If you think the EA community should engage more with nuclear security issues and should do so in different ways, I’m sure people would love to hear it. I would! Especially if you’d help answer questions like: How much can work on nuclear security reduce existential risk? What kind of nuclear security work is most important from an x-risk perspective?
I’d love to hear more about what your concerns and criticisms are. For example, I’d love to know: Is the Scoblic post the main thing that’s informing your impression? Do you have views on this set of posts about the severity of a US-Russia nuclear exchange from Luisa Rodriguez (https://forum.effectivealtruism.org/s/KJNrGbt3JWcYeifLk)? Is there effective altruist funding or activity in the nuclear security space that you think has been misguided?
All things being equal, I’d recommend you publish in journals that are prestigious in your particular field (though it might not be worth the effort). In international relations / political science (which I know best) that might be e.g.: International Organization, International Security, American Journal of Political Science, PNAS.
Other journals that are less prestigious but more likely to be keen on AI governance work include: Nature Machine Intelligence, Global Policy, Journal of AI Research, AI & Society. There are also a number of conferences to consider: AIES, FAccT, workshops at big ML conferences like NeurIPS or ICML. Another thing to look out for is journals with AI governance/policy special issues.
I find that one good strategy for finding a suitable journal is looking for articles similar to what you want to publish and seeing where they’ve been published. You can then e.g. refer to those in your letter to the editors, highlighting how your work is relevant to their interests.
Overall, I think it’s not that surprising that this change is being proposed, and I think it’s fairly reasonable. However, I do think it should be complemented with duties to avoid e.g. AI systems being put to high-risk uses without going through a conformity assessment, and it should be made clear that certain parts of the conformity assessment will require changes on the part of the producer of a general system if that system is used to produce a system for a high-risk use.
In more detail, my view is that the following changes should be made:

Goal 1: Avoid general systems being used in ways that should trigger regulatory requirements without the appropriate regulatory burdens kicking in. There are two kinds of cases one might worry about: (i) general systems might make it easier to produce a system that should either be covered by the transparency requirements (e.g. if your system is a chatbot, you need to tell the user that) or the high-risk requirements, leading to more such systems being put on the market without being registered.
Proposed solution: Make it the case that providers of general systems must do certain checks on how their model is being used and whether it is being used for high-risk uses without that AI system having been registered or having gone through the conformity assessment. Perhaps this would be done by giving the market surveillance authorities (MSAs) the right to ask providers of general models for certain information about how the model is being used. In practice, it could look as follows: the provider of the general system could have various ways to try to detect whether someone is using their system for something high-risk (companies like OpenAI are already developing tools and systems to do this). If they detect such a use, they are required to check it against the database of high-risk AI systems deployed on the EU market. If there’s a discrepancy, they must report it to the MSA and share the relevant information as evidence.
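To make the flow concrete, here’s a rough sketch of the kind of check I have in mind. All names, data sources, and the reporting mechanism are hypothetical placeholders, not anything in the actual proposal or any real provider’s tooling:

```python
from dataclasses import dataclass

@dataclass
class DetectedUse:
    customer_id: str
    suspected_purpose: str  # e.g. "credit scoring", "essay grading"

def detect_high_risk_uses() -> list[DetectedUse]:
    # Placeholder for the provider's own misuse-detection tooling
    # (usage classifiers, audits of API traffic, etc.).
    return []

def is_registered_in_eu_database(customer_id: str, purpose: str) -> bool:
    # Placeholder for a lookup against the (hypothetical) EU database of
    # registered, conformity-assessed high-risk AI systems.
    return False

def report_to_msa(use: DetectedUse) -> None:
    # Placeholder for sharing the relevant evidence with the MSA.
    print(f"Reporting {use.customer_id} ({use.suspected_purpose}) to the MSA")

def run_compliance_check() -> None:
    """Flag detected high-risk uses of the general model that lack a
    registered downstream system, and report the discrepancy."""
    for use in detect_high_risk_uses():
        if not is_registered_in_eu_database(use.customer_id, use.suspected_purpose):
            report_to_msa(use)
```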
(ii) There’s a chance that individuals using general systems for high-risk uses without placing anything on the market will not be covered by the regulation. That is, as the regulation is currently designed, if a company were to use public CCTV footage to assess the number of women vs. men walking down a street, I believe that would be a high-risk use. But if an individual does it, it might not count as a high-risk use because nothing is placed on the market. This could end up being an issue, especially if word about these kinds of use cases spreads. Perhaps a more compelling example would be people starting to use large language models as personal chatbots. The proposed regulation wouldn’t require the provider of the LLM to add any warnings about how this is simply a chatbot, even if the user starts e.g. using it as a therapist or for medical advice.
Proposed solution: My guess is that the solution is to expand the provision suggested above to also look for individuals using the systems for high-risk or limited-risk uses, and to require that such use be stopped.
Goal 2: (perhaps most important) Try to make it the case that crucial and appropriate parts of the conformity assessment will require changes on the part of the producer of the general system.
This could be done by e.g. making it the case that the technical documentation requires information that only the producer of the general model would have. It would plausibly already be the case with regards to the data requirements. It would also plausibly be the case regarding robustness. It seems worth making sure of those things. I don’t know if that’s a matter of changing the text of the legislation itself or about how the legislation will end up being interpreted.
One way to make sure that this is the case is to require that deployers only use general models that have gone through a certification process or that have also passed the conformity assessment (or perhaps a lighter version). I’m currently excited about the latter.
Why am I not excited about something more onerous on the part of the provider of the general system?
I think we can get a lot of the benefits of providers of general systems needing to meet certain requirements without them having to go through the conformity assessment themselves. I expect there to be lots of changes that need to be made to the general model to allow the deployer to complete their conformity assessment. If I try to use GPT-3 to create a system that rates essays (ignoring for now that OpenAI currently prohibits this in their Terms of Use), I’ll need to make sure that the system meets certain robustness requirements, that I can explain to a human overseer how it works, and so on. Meeting those requirements will, I think, require changes on the part of the developer of the general system. As such, I think the legal requirements will have an effect on general AI systems produced by big tech companies. To illustrate the point: if EU car manufacturers were required to use less carbon-intensive steel, that would have a large impact on the carbon-intensity of steel production in the EU, even though the steel manufacturers weren’t directly targeted by the legislation.
Introducing requirements on all general systems that can be used on the EU market seems hugely onerous to me. So much so that it would probably be a bad idea. I think that companies could fairly easily go from offering a general system on the EU market to offering a general-system-that-you’re-not-allowed-to-use-for-high-risk-uses. This could for example be done by adjusting the terms and conditions (OpenAI’s API usage guidelines already disallow most if not all high-risk uses as defined in the AI Act) or writing in big font somewhere “Not intended for high-risk uses as defined by the EU’s AI Act”. I worry that introducing requirements on general systems en masse would lead to that being the default response, and that it wouldn’t deliver much benefit beyond what we’d get if the changes I gestured at above were made.
We’ve now relaunched. We wrote up our current principles with regards to conflicts of interest and governance here: https://www.governance.ai/legal/conflict-of-interest. I’d be curious if folks have thoughts, in particular @ofer.
Thanks for the post! I was interested in what the difference between “Semiconductor industry amortize their R&D cost due to slower improvements” and “Sale price amortization when improvements are slower” is. Would the decrease in price stem from the decrease in cost as companies no longer need to spend as much on R&D?
Thanks! What happens to your doubling times if you exclude the outliers from efficient ML models?
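To clarify what I mean: since the doubling time falls out of a log-linear trend fit, it should be cheap to recompute after dropping the outlier models. A sketch with purely illustrative numbers (not your data):

```python
import numpy as np

# Illustrative data only: year vs. some performance-per-dollar metric.
years = np.array([2016, 2017, 2018, 2019, 2020, 2021, 2022], dtype=float)
metric = np.array([1.0, 1.6, 2.7, 4.1, 7.0, 11.5, 19.0])

# Fit log2(metric) against year; the slope is doublings per year.
slope, _ = np.polyfit(years, np.log2(metric), 1)
print(f"Doubling time: {1.0 / slope:.2f} years")

# Refit with the suspected outliers dropped (here, hypothetically, the last
# two points) to see how sensitive the estimate is to those models.
slope_excl, _ = np.polyfit(years[:-2], np.log2(metric[:-2]), 1)
print(f"Doubling time (outliers excluded): {1.0 / slope_excl:.2f} years")
```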
Semafor reporting confirms your view. They say Musk promised $1bn and gave $100mn before pulling out.