Currently work in AI Law, and fulfil a safety & legislation role at a major technology company
CAISID
These are some interesting thoughts.
I think OSINT is a good method for varying types of enforcement, especially because the general public can aid in gathering evidence to send to regulators. This happens a lot in the animal welfare industry AFAIK, though someone with experience here please feel free to correct me. I know Animal Rising recently used OSINT to gather evidence of 280 legal breaches from the livestock industry, which they handed to DEFRA, which is pretty cool. This is especially notable given that these were RSPCA-endorsed farms, so it showed that the stakeholder vetting (pun unintended) was failing. This only happened 3 days ago, so the link may expire, but here is an update.
For AI this is often a bit less effective, but it is still useful. A lot of the models in nuclear, policing, natsec, defence, or similar are likely to be protected in a way that makes OSINT difficult, but I’ve used it before for AI Governance impact. The issue is that even if you find something, a DSMA-Notice or similar can be used to stop publication. You said “Information on AI development gathered through OSINT could be misused by actors with their own agenda”, which is almost word for word the reason that data is often protected in the first place, haha. So you’re 100% right that in AI Governance in these sectors OSINT can be super useful but may fall at later hurdles.
However, commercial AI is much more prone to OSINT because there’s no real lever to stop you publishing OSINT information. In my experience the supply chain can be a fantastic source of OSINT, depending on how dedicated you are. That’s been a major AI Governance theme in the instances I’ve been involved in, on both sides of this.
There’s quite a bit of work in AI IP, but a lot of it is siloed. The legal field (particularly civil law) doesn’t do a fantastic job of making their content readable for non-legal people, so the research and developments can be a bit hit and miss in terms of informing people.
The tight IP laws don’t always help things (as you rightly mention). They can be good for helping keep models in-house, but to be honest that usually harms rather than helps risk mitigation. I do a lot of AI risk mitigation for clients, and I’ve been in the courtroom a few times where this has been a major issue in forcing companies to show or reduce their risk; it’s a fairly big issue in both compliance and criminal law.
IP in particular is a hard tightrope to walk, AI-wise.
This is a good read. I’ve been thinking a lot about how monopsonies affect regulation, and this ties in with that, which is useful.
It’s interesting someone voted ‘Disagree’ to this, and I would be interested in hearing why—even if that’s via inbox. Always happy to hear dissenting ideas.
I guess it depends on which area of AI Governance you research in. I’m almost entirely front-end, so a lot of my research is talking to people it will impact and trialling how different governance mechanisms might actually work in practice.
I guess spitballing it would be:
30% reading draft or upcoming governance changes
50% discussing with end users or using my own experience to highlight issues or required changes
20% writing those responses up for either the legislators or the organisations impacted
New OGL and ITAR changes are shifting AI Governance and Policy below the surface: A simplified update
Hm. The closest things I can think of would either be things like inciting racial hatred or hate speech (i.e. not physical, no intent for a further crime, but illegal). In terms of research, most research isn’t illegal but is usually tightly regulated by participating stakeholders, ethics panels, and industry regulations. Lots of it is stakeholder management too. I removed some information from my PhD thesis at the request of a government stakeholder, even though I didn’t have to. But it was a good idea to ensure future participation, and I could see the value in the reasoning. I’m not sure there was anything they could do legally if I had refused, as it wasn’t illegal per se.
The closest thing I can think of to your example is perhaps weapons research. There’s nothing specifically making weapons research illegal, but it would be an absolute quagmire in terms of not breaking the law. For example sharing the research could well fall under anti-terrorism legislation, and creating a prototype would obviously be illegal without the right permits. So realistically you could come up with a fantastic new idea for a weapon but you’d need to partner with a licensing authority very, very early on or risk doing all of your research by post at His Majesty’s pleasure for the next few decades.
I have in the past worked in some quite heavily regulated areas with AI, but always working with a stakeholder who had all the licenses etc so I’m not terribly sure how all that works behind the scenes.
You have some interesting questions here. I am a computer scientist and a legal scholar, and I work a lot with organisations on AI policy, as well as helping to create policy too. I can sympathise with a lot of the struggles here from experience. I’ll focus on some of the more concrete answers I can give in the hope that they are the most useful. Note that this explanation isn’t from your jurisdiction (which I assume from the FBI comment is the USA) but instead from England & Wales; as they’re both Common Law systems there’s a lot of overlap and many key themes are the same.
For example, one problem is: How do you even define what “AGI” or “trying to write an AGI” is?

This is actually a really big problem. There’s been a few times we’ve trialled new policies with a range of organisations and found that how those organisations interpret the term ‘AI’ makes a massive difference to how they interpret, understand, and adhere to the policy itself. This isn’t even a case of bad faith, more just people trying to attach meaning to a vague term and then doing their best but ultimately doing so in different directions. A real struggle is that when you try to get more specific, it can actually end up being less clear because the further you zoom in, the more you accidentally exclude. It’s a really difficult balancing act—so yes, you’re right. That’s a big problem.
I’m wondering how much this is actually a problem, though. As a layman, as far as I know there could be existing government policies that are somewhat comparably difficult to evaluate.
Oh, tons. In different industries, in a variety of forms. Law and policy can be famously hard to interpret. Words like ‘autonomous’, ‘harm’, and ‘intend’ are regular prickly customers.
Many judicial decisions related to crimes, as I vaguely understand it, depend on intentionality and belief, e.g. for a killing to be a murder, the killer must have intended to kill and must not have believed on reasonable grounds that zer life was imminently unjustifiedly threatened by the victim.
This is true to an extent. So in law you often have the actus reus (what actually happened) and the mens rea (what the person intended to happen). The law tends to weigh the mens rea quite heavily. Yes, intent is very important—but more so provable intent. Lots of murder cases get downgraded to manslaughter for a better chance at a conviction. Though to answer your question: yes, at a basic level criminal law often relates to intention and belief. Most of the time this is the objective belief of the average person, but there are some cases (such as self-defence in your example) where the intent is measured against the subjective belief of that individual in those particular circumstances.

What are some crimes that are defined by mental states that are even more difficult to evaluate? Insider trading? (The problem is still very hairy, because e.g. you have to define “AGI” broadly enough that it includes “generalist scientist tool-AI”, even though that phrase gives some plausible deniability like “we’re trying to make a thing which is bad at agentic stuff, and only good at thinky stuff”. Can you ban “unbounded algorithmic search”?)
Theft and assault of the everyday variety are actually some of the most difficult to evaluate, really, since both require intent to be criminal and yet intent can be super difficult to prove. In the context of what you’re asking, ‘plausible deniability’ is often a strategy chosen when accused of a crime (i.e. making the prosecution prove something non-provable, which is an uphill battle), but ultimately it would come down to a court to decide. You can ban whatever you want, but the actual interpretation could only really be tested in that manner. In terms of broad language, the definitions of words are often a core point of contention in court cases, so likely it would be resolved there, but honestly, from experience, the overwhelming majority of issues never reach court. Neither side wants to take the risk, so usually the company or organisation backs off and negotiates a settlement. The only times things really go ‘to the hilt’ is for criminal breaches which involve a very severe overstepping of the mark.
Bans on computer programs. E.g. bans on hacking private computer systems. How much do these bans work? Presumably fewer people hack their school’s grades database than would without whatever laws there are; on the other hand, there’s tons of piracy.
In the UK the Computer Misuse Act 1990 is actually one of the oldest bits of computer-specific legislation and is still effective today after a few amendments. That’s mostly due to the broadness of the law, and the fact that evidence is fairly easy to come by and intent in those cases is fairly easy to prove. It’s beginning to struggle in the new digital era though, thanks to totally unforeseen technologies like generative AI and blockchain.
Some bits of legislation have been really good at maintaining bans though. England and Wales have a few laws against CSAM which include the term ‘pseudo-photograph’, which actually applies to generative AI, so someone who launched an AI for that purpose would still be guilty of an offence. It depends what you mean by ‘ban’, as a ban in legislation can often function very differently from a ban by, for example, a regulator.
Bans on conspiracies with illegal long-term goals. E.g. hopefully-presumably you can’t in real life create the Let’s Build A Nuclear Bomb, Inc. company and hire a bunch of nuclear scientists and engineers with the express goal of blowing up a city. And hopefully-presumably your nuke company gets shut down well before you actually try to smuggle some uranium, even though “you were just doing theoretical math research on a whiteboard”. How specifically is this regulated? Could the same mechanism apply to AGI research?

Nuclear regulation is made up of a whole load of different laws and policy types too broad to really go into here, but essentially what you’re describing there is less about the technology and more about the goal. That’s terrorism and conspiracy to commit murder just to start off with, no matter whether you use a nuke or an AGI or a spatula. If your question centres more on ‘how do we dictate who is allowed access to dangerous knowledge and materials’, that’s usually a licensing issue. In theory you could have a licensing system around AGIs, but that would probably only work for a little while and would be really hard to implement without buy-in internationally.
If you’re specifically interested in how this example is regulated, I can’t help you in terms of US law beyond this actually quite funny example of a guy who attempted a home-built nuclear reactor and narrowly escaped criminal charges—however some UK-based laws include the Nuclear Installations Act 1965 and much of the policy from the Office for Nuclear Regulation (ONR).
Hopefully some of this response is useful!
Yeah, that’s fixed for me :)
No worries :)
This is a useful list, thank you for writing it.
In terms of:
UK specific questions
Could the UK establish a new regulator for AI (similar to the Financial Conduct Authority or Environment Agency)? What structure should such an institution have? This question may be especially important because the UK civil service tends to hire generalists, in a way which could plausibly make UK AI policy substantially worse.
I wrote some coverage here of this bill which seeks to do this, which may be useful for people exploring the above. Also well worth watching and not particularly well covered right now is how AUKUS will affect AI Governance internationally. I’m currently preparing a deeper dive on this as a post, but for people researching UK-specific governance it’s a good head start to look at these areas as ones where not a lot of people are directing effort.
This is really interesting, thank you. As an aside, am I the only one getting an unsecured network warning for nonlinear.org/network?
I wouldn’t be disheartened. I have considerable experience in AI safety, and my current role has me advising decision-makers on the topic at major tech organisations. I’ve had my work cited by politicians in parliament twice last year.
I’ve also been rejected for every single AI Safety fellowship or scholarship that I’ve ever applied for. That’s every advertised one, every single year, for at least 5 years. My last rejection, actually, was on March 4th (so a week ago!). A 0% success rate, baby!
Rejected doesn’t mean you’re bad. It’s just that there are maybe a dozen places for well over a thousand people, and remember these places have a certain goal in mind, so you could be the perfect candidate but at the wrong career stage, or location, or suchlike.

I’d say keep applying, but also apply outside the EA sphere. Don’t pigeonhole yourself. As others mentioned, keep developing skills, but I’d also add that you may never get accepted and that’s okay. It’s not a linear progression where you have to get one of these opportunities before you make an impact. Check out other branches.
Inbox me if you feel you need more personal direction, happy to help :)
This won’t be the answer you’re looking for but honestly, time permitting, I just take a day or three off. I find when I’m relaxing, giving myself space to breathe and think without force, that’s when creativity starts to flow again and ideas come in. Obviously this isn’t deadline-friendly!
This is a really interesting podcast—particularly the section with the discussion on foundation models and cost analysis. You mention a difficulty in exploring this. If you ever want to explore it, I’m happy to give some insight via inbox, because I’ve done a bit of work in industry in this area that I can share.
“What makes you stop posting?” could be reframed as “What makes you post in the first place?”, and “What might make it easier?” could be reframed as “What might make you publish posts that were more challenging for you (practically or emotionally)?”
The quality of many forum posts is very high, including from people who are not paid by a research org to write them and have no direct connection to the community (such as these two). So even if you only factor in the time cost, you would still have to suppose some pretty large benefits to explain why people write them.
This was a really good point, and it made me think for quite a while. I’ve posted on the forum a lot since re-entering the EA community (to the point I’ve consciously tried to do it less so it’s not spammy!), but I’ve never really thought about why I put so much effort into posts or, indeed, my comments. There’s not much of a difference between the two, really, since one of my recent comments on someone’s post was 1,207 words long haha. All in good faith, though!
I don’t gain anything from posting. I have a good job outside of EA, I’m not part of any EA groups, and I don’t particularly want or need anything from anyone in EA. So there’s nothing concrete there. I’ve never really thought about it, but it boils down to sharing knowledge. If the things I know about can help someone else somewhere do good better, or address a problem, or whatever then I like the idea that maybe my posts are useful to people. My specialist area is also kind of niche and difficult to enter, so I like the idea of making it more understandable and approachable.
I never get any karma really, or even high reads, but I do get high retention, so people (~50%) tend to read my posts all the way through, which I really like. So that ties in with what I think my core motivation is.
Obviously I think it’s good to make sure criticism is of ideas and not people/their values, and to be polite in a common sense way such as trying to give criticism as a compliment sandwich.
I’m a huge fan of this. It’s rare, but if ever I really disagree with someone’s post I’ll always highlight what I liked about it too. In my experience aside from being polite, it also results in better conversation.
Edit: Grammar
My PhD was in this area so I’d be super interested in hearing more about your thoughts on this. Looking forward to seeing this post if you decide on it :)
I think this would be an interesting post to read. I’m often surprised that existing AI disasters with considerable death/harm counts aren’t covered in more detail in people’s concerns. There have been a few times where AI systems have, by acting in a manner not anticipated by their creators, killed or otherwise injured very large numbers of people, exceeding any air disaster or similar we’ve ever seen. So the pollution aspect is quite interesting to me.
Posts about any of the knock-on or tertiary effects of AI would be interesting.
One of the things I think is important to remember when it comes to Defence is that the idea of boundaries between military technology and civilian technology hasn’t really existed since the 1970s. A vast amount of defence technology now is dual-use, meaning that even people working in (for example) the video games or automotive industry are, potentially without being aware of it, designing hardware and software for the defence industry. And funnily enough, vice-versa. So that line gets fuzzy fast. It sounds like your work is dual-use, so it might be a bit complex for you to work through in terms of ethics.
As for the hard ethics there, it depends on your own ethics and what you want to accomplish with the work. If the finance is the main draw, then that’s its own thing for only you to answer. If you want to make ‘wider impact’ in a positive way, then that’s a whole other thing that again I guess falls to you and relates largely to the role. There are plenty of people who work with stakeholders they aren’t exactly stoked about in order to achieve a larger goal.
I asked myself a similar question the first time I had the opportunity to do AI Governance with a police force, as someone who was from a background which often has friction with police. Some mixed feelings there. I eventually decided that the chance to make positive impact was worth it, but plenty of other people might feel otherwise.
In my job search until this point I have refused to apply to jobs at defense contractors and have turned down interviews from recruiters because it just seemed icky
I would end by saying that if something makes you feel ‘icky’ it might not be worth doing it, no matter what the more neutral ethics say. I’m happy with the lines I have drawn, and it’s important that you are as well. Not sure any of us can help with that :)
I agree with this and will add a (potentially unpopular) caveat of my own—work a ‘normal’ job outside of your EA interest area altogether if possible. Absolutely fantastic applicable experience to a whole range of stuff.
I hire for AI-related roles sometimes, and one of the main things I look for when hiring for AI Safety roles is experience doing other work. Undergrad to Postgrad to Academic Role is great for many, but experience working in a ‘normal’ work environment is super valuable and is something I look for. It also seems super neglected as a consideration in recruiting. For me it’s a huge green flag.
Just understanding how large organisations work, how stuff like logistics and supply chains work, the ‘soft knowledge’ often missing from a pure research career is insanely valuable in many sectors.
It’s almost frowned upon to the extent people apologise for it. “I worked 2 years in a warehouse, but not because I don’t care about AI Safety, it’s just I needed the money”—like dude, actual logistics experience is why I picked you for interview!
Your mileage may vary obviously, and the AI Safety roles I hire for are for ‘frontline impact’, so they involve less research and more stakeholder interaction, which makes those soft skills more useful. But too many people think stepping outside the “academic beeline” is some kind of failure.
It’s also worth highlighting I do super impactful AI Safety work now, leading a team that does some amazing frontline work, and in the past have been rejected from every single EA grant, EA fellowship, and EA job I’ve ever applied to :) That can be demoralising, but obviously wasn’t related to my value! Perhaps just fit and luck :)