My last question for now: what do you think is the path from risk analysis to policy? Some aspiring effective altruists have taken up a range of relevant jobs, for instance working for politicians, in think tanks, in defence, and in international governance. Can they play a role in promoting risk-reducing policies? And more generally, how can researchers get their insights implemented?
The rest of the questions are up to other readers.
Thanks very much for all of the work you've done reducing catastrophic risks, and in particular for appearing here to interface with EAs about your plans, progress, and ideas. GCRI seems like an extremely valuable institution, and it's great of you to give a window into how it all runs. I think it's one for all effective altruists to watch, and to support!
Thanks Ryan! And thanks again for organizing.
This is a really, really important question. In a sense, it all comes down to this: if risk analysis never makes its way into policy, there's not much point in doing it.
First, there are risk analysis positions that inform decision making very directly. (I'm speaking here in terms of 'decisions' instead of 'policies', but you can use these words pretty interchangeably.) These exist in both government and the private sector. However, as a general rule the risks in question are not GCRs; they are smaller risks.
For GCRs it's trickier, because companies can't make money off them. I've had some funny conversations with people in the insurance industry, trying to get them to cover GCRs; I'm pretty sure it just can't be done. Governments can be much friendlier territory for GCR work, since they don't need it to be profitable.
My big advice is to get involved in the decision processes as much as possible. GCRI calls this 'stakeholder engagement', and it is a core part of our integrated assessment and of our work in general. It means getting to know the people involved in the decisions, building relationships with them, understanding their motivations and their opportunities for doing things differently, and above all finding ways to build GCR reduction into their decisions in ways that are agreeable to them. I cannot emphasize enough how important it is to listen to the decision makers and try to understand things from their perspective.
For example, if you want to reduce AI risk, then get out there and meet some AI researchers, AI funders, and anyone else playing a role in AI development. Then talk with them about what they can do to reduce AI risk, and listen to what they are or aren't willing or able to do.
GCRI has so far done the most stakeholder engagement on nuclear weapons. I've been spending time at the United Nations, getting to know the diplomats and activists involved and learning what the issues are from their perspectives. I give talks on nuclear war risk, but much of the best work happens in private conversations along the way.
At any rate, some of the best ways to reduce risk don't follow logically from the initial risk analysis; rather, what we learn from engagement feeds back into the next round of analysis. So it's a two-way conversation. Ultimately, I think that's the best way to actually reduce risks.