Part 2 – Also, a note on expectations
On a slightly separate point, maybe some of the challenge I feel here comes from me having misplaced expectations. I think that before I dived into the longtermist academic research, I was hoping that the world looked like this:
[Diagram: a spectrum of the most important questions – Ethics → Worldview → Cause → Intervention → Charity → Plan – with "People solving them: GPI and FHI, etc" spanning the whole range.]
and I could find the answers I needed and get on with driving change – YAY.
But maybe the world actually looks more like this:
[Diagram: the same spectrum of questions, but with "People solving them: GPI and FHI" covering only part of the range.]
and there is so much more to do – Awww.
(This reminds me of talking to GovAI about policy: they said GovAI does not do applied policy research, but people often think that it does.)
I know it is not going to be top of anyone's to-do list, but I would love at some point to see an FHI post like this one from 80K, setting out what is in scope and what is out of scope – the out-of-scope work could be great for others in the ecosystem to take on.
(* Diagrams oversimplified again, but hopefully they make the point.)
Is this fair? FHI's research seems to me to venture into the Cause and Intervention buckets, and they seem to be working with government and industry to spur implementation of important policies/interventions that come out of their research. E.g., for each of FHI's research areas:
Macrostrategy: the most recent publication, Bostrom's Vulnerable World Hypothesis, calls for greatly amplified capacities for preventive policing and global governance (Cause).
AI Governance: the research agenda discusses AI safety as a cause area, and much of the research should lead to interventions. For example, the inequality/job displacement section discusses potential governance solutions, the AI race section discusses potential routes for avoiding races or ending those underway (e.g. Third-Party Standards, Verification, Enforcement, and Control), and there is discussion of the optimal design of institutions. Apparently researchers are active in international policy circles, regularly hosting discussions with leading academics in the field and advising governments and industry leaders.
AI Safety: Apparently FHI collaborates with and advises leading AI research organisations, such as Google DeepMind, on building safe AI.
Biosecurity: As well as researching the impacts of advanced biotech, FHI regularly advises policymakers, including the US President's Council on Bioethics, the US National Academy of Sciences, the Global Risk Register, and the UK Synthetic Biology Leadership Council, as well as serving on the board of DARPA's SafeGenes programme and directing iGEM's safety and security system.
Overall, it seems to me that FHI is spurring change from their research?
You may also find this interesting regarding AI interventions.
Your pushback here seems fair. These orgs certainly do some good work across this whole spectrum. My shoddy diagrams were meant to be more illustrative of a high-level point than accurate, but perhaps they are somewhat exaggerated and overly critical. I still think the high-level point about expectations versus reality is worth making (like the point about people's expectations of GovAI).