I think 90% of the answer to this is risk aversion from funders, especially LTFF and OpenPhil (see here). As a result, many things struggled for funding (see here).
We should acknowledge that doing good policy research often involves actually talking to and networking with policy people. It involves running think tanks and publishing policy reports, not just running academic institutions and publishing papers. You cannot do this kind of research well in a vacuum.
That fact, combined with funders who were (and maybe still are) somewhat against funding people to network with policy makers in any way (except for people they knew extremely well), has led to (and maybe is still leading to) very limited policy research and development happening.
I am sure others could justify this risk-averse approach, and there are certainly benefits to being risk averse. However, in my view this was a mistake (and maybe an ongoing one). I think it was driven by the fact that funders were/are: A] not policy people, so did/do not understand the space and were hesitant to make grants; B] heavily US-centric, so did/do not understand the non-US policy space; and C] heavily capacity constrained, so did/do not have time to correct for A or B.
– –
(P.S. I would also note that I am very cautious about saying there is “a lack of concrete policy suggestions”, or would at least want to be clear about what is meant by this. This phrase is used as one of the reasons for not funding policy engagement, and for saying we should spend a few more years just doing high-level academic work before ever engaging with policy makers. I think this is just wrong. We have more than enough policy suggestions to get started, and we will never get very, very good policy design unless we get started and interact with the policy world.)
My current model is that actually very few people who went to DC and did “AI policy work” chose a career that was well suited to proposing policies that help with existential risk from AI. In general, people tried to choose more of a path of “try to be helpful to the US government” and “become influential in the AI-adjacent parts of the US government”, but there are almost no people working in DC whose actual job it is to think about the intersection of AI policy and existential risk. Mostly it is just people whose job it is to “become influential in the US government so that later they can steer the AI existential risk conversation in a better way”.
I find this very sad and consider it one of our worst mistakes, though I am also not confident in that model, and am curious whether people have alternative models.
but there are almost no people working in DC whose actual job it is to think about the intersection of AI policy and existential risk.
That’s probably true because it’s not like jobs like that just happen to exist within government (unfortunately), and it’s hard to create your own role descriptions (especially with something so unusual) if you’re not already at the top.
That said, I think the strategy you describe EAs as having pursued can be impactful? For instance, now that AI risk has gone mainstream, some groups in government are starting to work on AI policy more directly, and if you’re already working on something kind of related and have a bunch of contacts and so on, you’re well positioned to get into these groups and even take a leading role.
What’s challenging is that you need to make career decisions very autonomously and have a detailed understanding of AI risk and the related levers to carve out your own valuable policy work at some point down the line (and not be complacent while “down the line” never comes until it’s too late). I could imagine that there are many EA-minded individuals who went into DC jobs or UK policy jobs with the intent to have an impact on AI later, but who are unlikely to do much with that because they’re not proactive enough and not “in the weeds” enough with thinking about “what needs to happen, concretely, to avert an AI catastrophe?”
Even so, I think I know several DC EAs who are exceptionally competent and super tuned in and who’ll likely do impactful work down the line, or are already about to do such things. (And I’m not even particularly connected to that sphere, DC/policy, so there are probably many more really cool EAs/EA-minded folks there that I’ve never talked to or read about.)
The slide Nathan is referring to. “We didn’t listen” feels a little strong; lots of people were working on policy detail or calling for it, it just seems ex post like it didn’t get sufficient attention. I agree directionally though, and Richard’s guesses at the causes (expecting fast take-off + business-as-usual politics) seem reasonable to me.
Richard Ngo just gave a talk at EAG Berlin about errors in AI governance, one being a lack of concrete policy suggestions.
Matt Yglesias said this a year ago. He was even the main speaker at EAG DC: https://www.slowboring.com/p/at-last-an-ai-existential-risk-policy?utm_source=%2Fsearch%2Fai&utm_medium=reader2
Seems worth asking why we didn’t listen to top policy writers when they warned that we didn’t have good proposals.
What do you think of Thomas Larson’s bill? It seems pretty concrete to me, do you just think it is not good?
I am going on what Ngo said. So I guess, what does he think of it?
This sounds like the sort of question you should email Richard to ask before you make blanket accusations.
Ehhh, not really. I think it’s not a crazy view to hold and I wrote it on a shortform.
Also, *EAGxBerlin.