Many talented lawyers do not contribute to AI Safety, simply because they’ve never had a chance to work with AIS researchers or don’t know what the field entails.
I am hopeful that this can improve if we create more structured opportunities for cooperation, and that is the main motivation behind the upcoming AI Safety Law-a-thon, organised by AI-Plans[1]:
A hackathon where every team pairs one lawyer with one technical AI safety researcher. Each pair will tackle challenges drawn from real legal bottlenecks and overlooked AI safety risks.
From my time in the tech industry, I suspect that if more senior counsel actually understood alignment risks, frontier AI deals would face far more scrutiny. Right now, most law firms advising clients focus on IP rights or privacy clauses, not on whether model alignment drift could blow up the contract six months after signing.
We launched the event one day ago, and we already have an impressive lineup of senior counsel from top firms and regulators. What we still need are technical AI safety people to pair with them!
If you join, you’ll help stress-test the legal scenarios and point out alignment risks that are obvious to you but not salient to your counterpart.
You’ll also get the chance to put your own questions to experienced attorneys.
📅 25–26 October
🌍 Hybrid: online + in-person (London)
If you’re up for it, sign up here: https://luma.com/8hv5n7t0
Feel free to DM me if you have any queries!
[1]
NOTE: I really want to improve how I communicate updates like these. If this sounds too salesy or overly persuasive, it would really help me if you comment and suggest how to improve the wording.
I find this more effective than just downvoting, but of course, do so if you want. Thank you in advance!
This sounds valuable! Quick question about participation: I’m an EA-aligned lawyer concerned about AI safety, though not currently at a top firm or working directly in AI regulation. Would someone with general legal expertise and strong motivation to contribute to AI safety be useful for this, or are you specifically looking for lawyers already working in tech/AI policy?
I imagine fresh perspectives from lawyers outside the usual AI circles could be valuable for spotting overlooked risks, but wanted to check if that fits what you’re envisioning.
Apparently emojis don’t render properly in Firefox. I didn’t see any emojis, so I tried opening this page in Chrome, and indeed they are there, but they don’t show up in my normal browser.
Of course! We’d love to have you there!
One thing is that emojis are pretty rare on the Forum (despite being popular in places like LinkedIn and some Slacks), so they sometimes make things appear more salesy or even LLM-generated.
In my opinion, your text itself doesn’t seem too salesy or overly persuasive.