Keeping human rights at the centre of AI policy communication is extremely underappreciated.
For example, in 2021 the UN Human Rights chief called for a moratorium on the sale and use of artificial intelligence (AI) systems until adequate safeguards are put in place.
Respect for human rights is a well-established central norm; leverage it.
Great post!
Do note that, given the context and background, a lot of your peers are probably going to be nudged towards charitable ideas. I would encourage you to be mindful that you are doing things that have counterfactual impact, while also taking into account the value of your own time and your potential to do good.
I also encourage you to be careful not to epistemically take over other people’s world models with something like “AI is going to kill us all”. I think an uncomfortable number of spaces inadvertently and unknowingly do this, and it is one of the key reasons why I never started an EA group at my university.
Also, here is a link if anyone wants to read more on the China AI registry, which seems to be based on the model cards paper.
Nice summary! I generally see model registries as a good tool for ensuring deployment safety by logging versions of algorithms and tracking spikes in capabilities. I think a feasible way to push this into the current discourse is to frame it within the current algorithmic transparency agenda.
Potential risks here include who decides what counts as a new version of a given model. If the nomenclature is left in the hands of companies, it is prone to misuse. Also, the EU AI Act seems to take a risk-based approach, with the different kinds of risks being more or less lines in the sand.
Another important point is what we do with the information we gather from these sources. I think there are “softer” (safety assessments, incident reporting) and “harder” (bans, disabling) ways to go about this. It seems likely to me that governments are going to want to lean into the softer bucket to enable innovation and let some due process kick in. This is probably more true of the US, which has always favoured sector-specific regulation.
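To make the registry idea more concrete, here is a minimal sketch of what a registry entry might log per model version and how a capability spike between versions could be flagged. All names, fields, and thresholds are hypothetical illustrations, not drawn from the China AI registry or any existing scheme.

```python
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    """One hypothetical registry record for a deployed model version."""
    model_name: str
    version: str          # who gets to declare a "new version" is an open question
    developer: str
    release_date: str
    eval_scores: dict = field(default_factory=dict)   # benchmark name -> score
    incidents: list = field(default_factory=list)     # "softer" tools: incident reports

def capability_spike(old: RegistryEntry, new: RegistryEntry, threshold: float = 0.15) -> list:
    """Return benchmarks where the newer version improved by more than `threshold` (relative)."""
    flagged = []
    for bench, new_score in new.eval_scores.items():
        old_score = old.eval_scores.get(bench)
        if old_score and (new_score - old_score) / old_score > threshold:
            flagged.append(bench)
    return flagged

# Example: a regulator comparing two logged versions of the same model.
v1 = RegistryEntry("ExampleLM", "1.0", "ExampleCo", "2023-01-01", {"reasoning": 0.52})
v2 = RegistryEntry("ExampleLM", "2.0", "ExampleCo", "2023-06-01", {"reasoning": 0.71})
print(capability_spike(v1, v2))  # -> ['reasoning']
```

Even a schema this simple surfaces the governance questions above: who sets the version string, who picks the benchmarks, and whether a flagged spike triggers a "softer" or "harder" response.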
It’s frankly quite concerning that technical specifications are usually only worked on by Working Groups after high-level qualitative goals have been set by policymakers; this seems to open a can of worms for differing interpretations and safety washing.
After talking and working for some time with non-EA organisations in the AI policy space, I believe that we need to give more credence to the here-and-now of AI safety policy as well, in order to get the attention of policymakers and get our foot in the door. That also gives us space to collaborate with other think tanks and organisations outside of the x-risk space that are proactive and committed to AI policy. Right now, a lot of those people still see x-risks as fringe and radical (and these are people who are supposed to be on our side).
Governments tend to move slowly, with due process, and in small increments (think, “We are going to first maybe do some risk monitoring, and only then auditing”). Policymakers are only visionaries with horizons until the end of their terms (hmm, no surprise). Usually, broad strokes in policy require precedents of a similar size to be feasible within a policymaker’s agenda and the Overton window.
Every group that comes to a policy meeting thinks that their agenda item is the most pressing because, by definition, most of the time, contacting and getting meetings with policymakers means that you are proactive and have done your homework.
I want to see more EAs respond to Public Voice Opportunities, for instance, something I rarely hear about on the EA Forum or via EA channels/materials.
So, some high-level suggestions based on my interactions with other people are:
Being more explicit about this in 80K calls, or talking openly about the funding bar (potentially with grantmakers, or via intros to successful candidates who do independent work). Maybe organisations could explicitly state this in their fellowship/intern/job applications, e.g. “Only 10 out of 300 applicants were selected last year”, so that people don’t over-rely on a handful of applications.
There is a very obvious point that Community Builders can only do so much, because their general job is to point out resources and set initial things rolling. I think that, as community builders, being vocal about this from an early point is important. This could look like, “Hey, I only know as much as you do now that you have read AGI SF and Superintelligence.” Community builders could also try connecting with slightly more senior people and doing intros on a selective basis (e.g., I know a few good community builders who go out of their way at an EAGx to set up conversations with such people).
I think metrics for 80K and CBs need to be more heavily weighted (if they aren’t already) towards “X went on to do an internship and publish a paper” and away from “this person read Superintelligence and did a fun hackathon”. The latter also creates weird sub-incentives for community members to score brownie points with CBs and make a lot of movement with little impactful progress.
Creating your own opportunities is rarely talked about in EA circles. There is a lot of talk about finding opportunities, and newcomers get overwhelmed with EA org webpages, which, coupled with neglectedness, causes them to overestimate how many opportunities there are. Maybe there could be a guide for this, or some sort of group/support network?
For early-career folks, maybe there could be some sort of peer buddy system where people who are a little further down the road can get matched and collaborate/talk. A lot of these conversations involve safe spaces, building trust, and talking about really sensitive issues (like finances, runway planning, and critical feedback on applications). I have been lucky to build such a circle within EA, but I recognize that’s only because of certain opportunities I got early on, along with being comfortable reaching out to people, which not everyone is.
We need to identify more proactive people who already have a track record of social impact or of being driven by certain kinds of research, instead of just high-potential people; these are probably the only people who will actually convert to returns for the movement (very crudely speaking). This is even more true in non-EA hubs, where good connections aren’t just one local meetup away as they are in NYC or Oxford. I think there is a higher attrition rate of high-potential people in LMICs, at least partly due to this.
I find the Biden chip export controls a step in the right direction, and they also made me update my world model towards compute governance being an impactful lever. However, I am concerned that our goals aren’t aligned with theirs; US policymakers’ incentive right now is to curb China’s tech growth (plus fun trade-war reasons), not to pause AI.
This optimization for different incentives is probably going to create some split between US policymakers and AI safety folks as time goes on.
It also makes China more likely to treat this as a tech race, which sets up interesting competitive race dynamics between the US and China that I don’t see talked about enough.
We as a movement do a terrible job of communicating just how hard it might be to get a job in AI safety, and we honestly cause people to anchor on and over-rely on EA resources, which sets unrealistic expectations and isn’t fair to them.
Nice synthesis! Going forward, I think it’s going to be important to see how other major players in the global market, like the EU, Japan, and South Korea, respond to the U.S.-China semiconductor rivalry.
I’m curious what technological, economic, and logistical challenges China must overcome to create a self-reliant semiconductor industry. I think something we forget from an AI risk point of view is how much algorithmic efficiency or other tech breakthroughs (e.g., emerging technologies that might reduce dependence on traditional semiconductor manufacturing processes) might make the question of chips redundant.
There is almost a feedback loop here: geopolitics fuels the chip trade war, which forces everyone to become more advanced and makes states pick sides, which fuels tech advancement, which feeds into capabilities and then into concerns, which bring about more trade wars.
Nice summary post!
On the point of non-EA funders coming into the space, it’s important to consider messaging: we don’t want to come off as alarmist, overly patronizing, or too certain of ourselves, but rather to engage in a constructive dialogue that builds some shared understanding of the stakes involved.
There also needs to be incentive alignment, which in the short term might also mean collaborating with people on things that aren’t directly x-risk related, like promoting ethical AI, enhancing transparency in AI development, etc.
This is cool! Good luck on the program
This post is so valuable; I remember flinching and trying to “save” my call for multiple months until a friend at an EA fellowship literally told me, “You do know that they give you the stuff to prep with if you are accepted, right?” I applied the very same night and probably thought about some aspect of my call nearly every other week of my summer internship.
What are the biggest bottlenecks and/or inefficiencies that impede 80K from having more impact?
I have seen way too many people not wanting to apply for 80K calls because they aren’t EAs or don’t want to work in x-risk areas. It almost seems like the message is “80K is an EA-aligned-only service.”
How is the team approaching this (changes in messaging, for example)?
How much would you want people to weight 80K calls in their overall decision-making? (Approximate ranges or examples are fine.)
How often do you direct someone away from AI Safety to work on something else (say global health and development)?
What kind of criteria or plans do you look for in people who are junior in the AI governance field and looking for independent research grants? Is this a kind of application you would want to see more of?
Really nice post. I think that a lot of EA doesn’t appeal to people from all backgrounds (especially Global South countries) with the same level of enthusiasm, which is a real shame given that a lot of the most cost-effective good that can be done in the world right now is in those very places.
I see way too many people confusing movement with progress in the policy space.
There can be a lot of drafts becoming bills that still leave significant room for regulatory capture in the specifics, which will be decided later on. Take risk levels, for instance, which are subjective: lots of legal leeway for companies to exploit.