I think it is worth appreciating the number and depth of insights that FHI can claim significant credit for. In no particular order:
The concept of existential risk, and arguments for treating x-risk reduction as a global priority (see: The Precipice)
Arguments for x-risk from AI, and other philosophical considerations around superintelligent AI (see: Superintelligence)
Arguments for the scope and importance of humanity’s long-term future (since called longtermism)
Information hazards
Observer selection effects and ‘anthropic shadow’
Bounding natural extinction rates with statistical methods
The vulnerable world hypothesis
Moral trade
Crucial considerations
The unilateralist’s curse
Dissolving the Fermi paradox
The reversal test in applied ethics
‘Comprehensive AI services’ as an alternative to unipolar outcomes
The concept of existential hope
Note especially how much of the literal terminology was coined on (one imagines) a whiteboard in FHI. “Existential risk” isn’t a neologism, but I understand it was Nick who first suggested it be used in a principled way to point to the “loss of potential” thing. “Existential hope”, “vulnerable world”, “unilateralist’s curse”, “information hazard”, all (as far as I know) tracing back to an FHI publication.
It’s also worth remarking on the areas of study that FHI effectively incubated, and which are now full-blown fields of research:
The ‘Governance of AI Program’ was launched in 2017, to study questions around policy and advanced AI, beyond the narrowly technical questions. That project was spun out of FHI to become the Centre for the Governance of AI. As far as I understand, it was the first serious research effort on what’s now called “AI governance”.
From roughly 2019 onwards, the working group on biological risks seems to have been fairly instrumental in making the case for biological risk reduction as a global priority, specifically because of engineered pandemics.
If research on digital minds (and their implications) grows to become something resembling a ‘field’, then the small team and working groups on digital minds can make a claim to precedence, as well as early and more recent published work.
FHI was staggeringly influential; more than many realise.
Edit: I wrote some longer reflections on FHI here.
I’m awestruck, that is an incredible track record. Thanks for taking the time to write this out.
These are concepts and ideas I regularly use throughout my week and which have significantly shaped my thinking. A deep thanks to everyone who has contributed to FHI, your work certainly had an influence on me.
What a champ. If institutions can be heroes, FHI is surely one.