Markus Anderljung On The AI Policy Landscape

Link post

Markus Anderljung is the Head of AI Policy at the Centre for the Governance of AI (GovAI) and was previously seconded to the UK Cabinet Office as a senior policy specialist.

In this episode we discuss recent AI policy takes, the kind of work GovAI is doing, and how you could have an impact on the AI governance landscape more broadly.

Below are some highlighted quotes from our conversation (available on YouTube, Apple Podcasts, Google Podcasts and Spotify). For the full context of each quote, see the accompanying transcript.


Preparing The World For More Advanced Systems by Reducing Current Harm From AI

“If you’re trying to affect AI labs and what they’re doing, I think a lot of the things that you want to do today look like… you find cases where you’re dealing with a present-day problem, or a problem that these companies will start feeling soon, and ways in which these AI systems might be causing harm now. And then you try to figure out how to solve that problem while also preparing the company, and preparing the world, for more advanced systems. Part of the reason for that is you’ll just have a larger coalition of actors, and people within the company, and people outside of the company, who will be excited and interested in helping out. And I think that’s just going to be quite… A lot of the time it’s going to be the really useful thing to do.”

AI Policy Work Should Be Robust To Worldviews

“People’s credences on the extent to which we’ll have human-level machine intelligence or whatever, they differ widely. And it’s going to be difficult. A lot of the time, it’s going to be difficult to push things through if the only reason is, ‘Oh, this will only be helpful if it’s the case that we, in the next 20 years or whatever, develop these very, very powerful systems.’ I think in general, you want to be thinking about things that make sense from multiple perspectives, sort of robust to worldviews.”

With Great Compute Comes Great Responsibility

“Some of the kinds of things to explore there include this thing that we talked about earlier: could we make it the case that there are maybe levels, or something like this, of what amount of compute comes with what amount of responsibility? I think that’s a really promising one, and there’s a bunch of ways in which you could try to make that happen. And there’s a bunch of details to think through.”

How To Incentivize External Scrutiny in Big Tech

“How can you make it the case that the world’s most powerful or impactful models receive external scrutiny, without that requiring that you give the model to a whole bunch of different actors? Partly just because, with these companies, you need to make these kinds of governance systems incentive-compatible. And so, I think you’re going to be hard pressed to get Facebook to just give everyone their recommender algorithm or whatever, or their newsfeed algorithm.”

Actors Start Taking Hits On Safety When They Assume Other Players Are Irresponsible

“If they believe that the other actor is going to act really irresponsibly… Even if they develop these very powerful systems, they’re not going to use them for good purposes or whatever, then they’re going to be much more likely to say, ‘Okay, well it’s worth it for me to take this hit on safety or act less responsibly, or develop my system less carefully, because it’s really important that I make it there first.’ That seems like a worrying situation to me.”
