I am a computer scientist (to degree level) and legal scholar (to PhD level) working at the intersection between technology and law. I currently work in a legislation role at a major technology company, and as a consultant to government and industry on AI Law, Policy, Governance, and Regulation.
CAISID
This was a super interesting read.
One of the major failures I often see, working in policy, is a lack of actual real-world experience. There are a huge number of upsides to the undergrad-to-PhD-to-academic pipeline, but one downside is that many people who have never worked in industry, or in any non-academic role, have very little idea of just how much the ‘coalface’ differs from what is written on paper, or just how cumbersome even minor policy shifts can be.
I judged an AI regulation/policy contest last year, and my number one piece of feedback was that entrants hadn’t considered the human element of the ‘end-users’ of policy. For example: can the people and orgs a new regulation or governance regime impacts actually understand what it wants from them, can they demonstrate compliance, and can they even comply at all? Not all orgs are impacted equally.
I agree, then, that your pointers towards stakeholder management and social skills are very important, as is seemingly irrelevant experience from working outside of research. One of the best policy researchers I know used to work in a warehouse, and that knowledge of complex socio-logistic environments within large organisations helps him tremendously, even though on paper that was an irrelevant role.
UK AI Bill Analysis & Opinion
This is a really good point, and perhaps the number one mistake I see in this area. People also forget that policy changes have colossal impacts on very complex human systems: the bigger the change, the bigger the impact. A small step is a lot easier for end-users to stomach, and to buy into, than a large one.
I often advise people to think of the difference in cost and effort between “I have to re-wallpaper one wall” and “I need to tear my house down to the foundations and rebuild it”.
That said, I think a lot of it is because actual policy work is super hard to break into and gain experience in. There needs to be more training and support available, particularly for early-career researchers.
I’m going to stick to forum norms and assume good faith in this question. There is also every chance I am misinterpreting it (it’s very early!).
I’m curious as to what has prompted “my guess is suicide hotline volunteers aren’t that stellar”, and the assumption that for whatever reason people in this forum would be any better? It seems the entire premise here rests on those two unevidenced assertions, which potentially explains why this path isn’t recommended more.
This is a very interesting and timely proposal. Well done!
I am particularly interested in the idea of the meeting spaces. Given the nature of some policy conversations, it can often be difficult to schedule meetings on-site, particularly if those sites have licensing requirements for visitors. It would also be handy for centralising socio-legal research: the number of times I’ve had to travel to London and then visit 4 or 5 locations over a day or so adds up to a lot of wasted travel, when one could just schedule all the meetings at a purpose-made location. Booking venues can be a roll of the dice, and a bad venue can negatively impact a stakeholder meeting months in the making.
The spaces I mostly use for more minor policy meetings in London are difficult to book and lack easy access, which is something LISA seems to solve. If the registration postcode for LISA is the same as its physical location, then the location is also quite good for the policy end of things.
That is assuming it is possible for researchers to have visitors on site (I’m not sure what the rules would be for that), but for those in frontline AI policymaking it would be a huge advantage, and would help attract that kind of specialism.
The talks in the events section also seem really good, with nice variation.
One piece of mixed feedback: it’s good that it’s not just technically focused and that there’s a 70/30 split between technical and non-technical. However, if you actually want to achieve the policy-maker reach you mention in the post, it may make sense to expand that 30% a bit, given that socio-legal and political researchers are a vital piece of pitching technical findings in a realistic way. The 70/30 split is still good and I can see the reason for it, but hopefully it will be a flexible goal rather than a rigid limit. It also looks like there is an effort to network people between specialisms via the co-working space, which is a really great touch and something most other spaces struggle to do effectively.
All in all looks great, and quite excited to see what comes out of it!
This is a really interesting take, and I agree with many elements. There is one element I want to explore more, and one I’d like to contest.
Firstly, I find a lot of the acceleration vs deceleration debate to be mostly theoretical and academic, not unlike debating whether it is better to have tides or to stop them and have a still ocean. At the end of the day (four times a day in most places, if we’re being pedantic) the tide is still going to do its thing. It’s the same with technological progress. Could you make it harder to innovate and improve technology? Yes. But realistically speaking, pausing or freezing the status quo in anything approaching an effective manner is just not possible. It’s the same issue I had with signing an open letter calling for a freeze. You can get everyone in the nation to sign an open letter saying “Don’t commit crimes”, but that isn’t going to solve the crime problem. But that’s a bit of a tangent, and I don’t want to hijack your post or your comments with unrelated debate.
Secondly, I think nuclear and AI make quite a poor comparison. Much of this is anecdotal, having worked in both industries in a regulation role. Firstly, the very high levels of anti-nuclear campaigning and risk aversion have resulted in nuclear energy being a very heavily (and effectively) regulated industry. If it were not for the amount of anti-nuclear sentiment, I don’t think we’d have that level of safety today; I think that sentiment is partly what makes it so safe. I agree, when you discuss the risk tradeoffs between coal and nuclear, that it’s not as clear-cut as may be imagined, but I don’t think it supports the core argument very well. Also, nuclear energy and AI are very different industries in which to undertake risk reduction, mostly because of the levers of control you have through licensing, resources, and capital. However, that may itself be a product of the aforementioned lobbying resulting in very burdensome regulation, and perhaps AI will prove similarly easy to regulate in future.
It’s also very possible that I’m misinterpreting your point, so please do let me know if that’s the case. Ultimately I agree with your core point that this is a fallacy seen in much AI Safety reasoning, and that even stopping now would be shutting the stable door after the horse has bolted. But I think there is a middle ground, balancing speed of improvement against slower, more deliberate safeguards, that is a good way to lessen risk. I actually think nuclear energy is a good example of this, rather than a poor one.
I would like to see how many people read to the end. I have an 18-minute-read post, and I can see that many people read for 1m 30s, so they probably only read the tl;dr or summary, but it also tells me that a number of people read for much longer. I’d like to know how many read it to the end, so I can tell whether posts need to be much shorter.
It may not be possible, but a wishlist idea would be to know a bit more about who these users are in terms of interest demographic. If my post is on AI Safety but most of those who click away fast have animal welfare listed as their primary interest, then that’s not an issue. But if my post is on AI Safety and lots of AI Safety people didn’t read or engage, then that’s an issue I need to fix. This may not be feasible to implement, however.
All in all I like it, and it feels like it gives insight into developing better engagement habits rather than encouraging clickbait.
I agree, and I think part of the problem is giving a false impression of what kind of training or experience is most useful. You mention this in the over-reliance on EA resources, which I think is a major problem. It is especially an issue when someone applies for AI Safety related jobs outside of the EA sphere, because they’re competing with people with a much wider range of experience.
I’ve always felt there should be a ‘useful non-EA courses/resources’ guide.
No worries, there was always a chance I was misinterpreting the claim in that section. Happy for us to skip that.
For my second section I was talking more about stasis in the fuller sense, i.e. a pause in innovation in certain areas. Some are asking for full stasis for a period of time in the name of safety, others for a slow-down. I agree that safe stasis is a fallacy for the reasons I outlined, and agree with most of your points, particularly that everything is a risk-risk tradeoff. I’m not entirely sold on the plausibility of slowdowns or pauses from a logistical deployment perspective, which is where I think I got bogged down in the weeds in my response there.
I am only recently returning to EA after a year or two away. The break was largely due to disillusionment; the desire to keep pursuing core EA principles is what has caused me to (tentatively, and in a limited manner) return.
It was honestly a hugely beneficial thing for me. I got to see how lots of other places ‘do good’, and it helped me identify some of the ways EA misses the mark, as well as what EA gets right that other places don’t.
The biggest message I think should be emphasised is “it’s okay to stray”. In fact it’s healthy.
I would echo what @aogara said below: it helps hugely to see how the sausage is made.
Perhaps there could be a forum option to add interest tags to your profile, if you’re happy for this data to be collected?
For the reading to end, yes that would be fantastic. Obviously it can’t tell what people were looking at when they left, but based on average reading time it would help pinpoint just how long certain topics should be.
Great to see the stuff you guys are up to with the forum though!
Compliance Monitoring as an Impactful Mechanism of AI Safety Policy
That’s a good regulatory mechanism, and isn’t unlike many that exist UK-side for systems intended for security or nuclear applications. Surprisingly, there isn’t a similar requirement for policing, although the above-mentioned case has drastically improved the willingness of forces to have such systems adequately (and sometimes publicly) vetted. It certainly increased the seriousness with which AI safety is considered in a few industries.
I’d really like to see a system similar to the one you just mentioned for AI systems over a certain threshold, or for sale to certain industries. A licensing process would be useful, though it obviously faces challenges given that AI systems can and do change over time. This is one of the big weaknesses of a NIST certification, and one I am careful to raise with those seeking regulatory input.
What happened in 2022, out of interest? Anyone know?
I’m not entirely sure why you’re being karma-bombed for this. I’ve done what I can to bring your score up towards 0.
Agree or disagree, I don’t think your comment breaks any rules or norms, and was well written with reasoning provided. Don’t take the weird scoring to heart.
This seems like a really exciting fellowship, and I’ll make sure to recommend it to some of the law students I interact with. Will a compendium of outputs be released? I know some orgs who would be interested in what these projects throw up.
This is a really good post, and something that is both complex and important. Thank you for taking the time to make it.
From some of the language used, I think this has a US-centric lean. I’m UK-based, but given the shared common law tradition I think there’s some common ground to comment from. Given the jurisdictional difference, none of these are criticisms or feedback; they are comparative explorations of your post against our own circumstances.
One thing I think is key from a liability perspective is the lack of IP protections for algorithms used as evidence. It may not be the case in the US, but in England & Wales there are very few (if any) intellectual property protections for algorithms once they are out in the open. This makes developers and retailers absurdly reluctant to disclose algorithms in court, even when they are in the right. I think that is a niggle when it comes to using liability as a lever, and something that needs sorting.
The UK also doesn’t really have much of a punitive damages system in most cases; it’s almost entirely based on “putting the individual back where they were”, with the exception of violated safety standards etc. I think using tort law is still feasible though, if we can address the IP issue and link harms more closely to the ECHR, data law, and similar.
I don’t think there needs to be a worry about imminent existential risk to use tort law as a lever. Nuclear regulation deals with existential risk, but its safety standards still have plenty of punitive elements, as well as a tort law aspect. It links in with your strict liability point, but I do wonder how we would define ‘unpredictable and uncontrollable’ for AI systems, though likely that is one for the guidance notes.
I do think there would be merit in compulsory insurance, with a portion of that insurance paying into a pot from which significant uninsured harms are compensated. The UK does this with motor insurance, and other countries likely have similar systems.
Really enjoyed this post, and hope to see more from you soon.
No, no additional clarity needed at all—it was obvious. I just didn’t want it to come off like I was criticising rather than saying how this could work on our side of the pond :) I’ll be sure to give that a read tonight!
“If you are cofounding an organization, have an agreement about what happens if you have irreconcilable disagreements with your cofounders. Every single startup advice book tells you to do this, and nobody does it because they think they are special, but you aren’t special. Even if your cofounder is your best friend and you are perfectly value-aligned, you should still have an agreement about handling irreconcilable disagreements.”
Coming from a legal background, I find this the source of so much frustration. If you’re best friends, you need the agreement even more, because having procedures in place allows the friendship to survive a major disagreement. Not having an agreement like this turns a multi-hour mediation session into a multi-year court battle.
If your friend baulks at making such an agreement, it doesn’t bode well for handling other uncomfortable conversations.
Concerning the rest of the post, I’ve been fairly flabbergasted at how many orgs with so much funding have almost no standardised internal policies and procedures. Hire a lawyer for a few weeks, guys. It’s much less expensive than a court case, where you’ll be needing them for years.
This is a very interesting read. I have some feedback (mostly neutral) to help further shape these ideas, mostly just to prompt some extra directions of thought. It’s not a list of criticisms, just me spitballing ideas on top of your foundations:
A product that carries large negative externalities
You mention aerospace and nuclear as good demonstrations of regulation, and to an extent I agree, but a potential weakness here is that the strength of regulation comes largely from the fact that these markets are monopsonies, or close to it: there are very few builders of these systems, they have very few (or single) customers, and they have access to top-tier legal talent. AI development is much more diverse, and I think this makes regulation harder. Not saying your idea on this element is bad (it’s very good), but it’s something to bear in mind. This would be an interesting governance category to perhaps split into subcategories.
Innovation policy
This is another good idea, and my food for thought here relates closely to the above, again because of the nuclear mention. One thing I’d look into is positive influence on procurement: make it more rewarding for an organisation to buy (or build) safer AI than the financial reward for not doing so. Policing in England and Wales is experiencing a subtle shift like this right now, which has actually been very impactful.
A national security risk
Obviously it’s hard to get detailed in an overview post, but WMDs are regulated in a specific way which doesn’t necessarily marry well with NatSec. There’s some great research right now on how NatSec-related algorithms and transparency threats are beginning to be regulated, with some recent trials of regulations.
Preventing competitive dynamics
Not much to say here, as this is outside my expertise area. I’ll leave that for others.
As an instrument of great power conflict
This was an interesting point. One thing I’d highlight: though most of my work is in AI regulation, I’ve done a fair amount of space regulation too, and a thing to bear in mind is that space law has aged horribly and is now stagnant. One of the main issues is that it was written when there were three space powers (mainly the US and USSR, with the UK as a US-aligned third space power), and the regulation was built around the idea of a major technological bottleneck to reaching space and the ability of two nations to ‘police’ it all. This is more true of the Outer Space Treaty than of some of the treaties that followed and built on it. Obviously the modern day bears very little resemblance to that, which makes things more difficult, e.g. private entities exploring space and modern technology allowing autonomy. It’s worth thinking about how we would avoid this when we don’t know what the world will look like in 10, 25, or 50 years.
Improving consumer welfare
This was a great point. Not much further direction to add here, other than to suggest looking at how some laws have been successfully future-proofed against AI changes versus how others have been less so.
Political economy
I’ve done a bunch of work in this area, but haven’t really got anything to add beyond what you’ve put.
Military Technology
One of the major bottlenecks with this category is that most new projects happen far behind closed doors for decades. EA currently lacks much of a MilTech presence, which is a missed opportunity IMO.
All in all this is an interesting way to group AI governance areas, but I think some additional attention could be paid to how the markets in each area behave and how that affects regulation. Perhaps an extra category for monopsonies and for large, diverse supplier markets, at opposite ends of a spectrum?