Yes, for instance, as mentioned in the appendix, some non-fictitious examples for Global Health and Development are:
We produced numerous research reports for Open Phil assessing the potential of global health and development interventions, looking for interventions that could be as cost-effective as, or more cost-effective than, the ones currently ranked top by GiveWell. This included full reports on the following:
The badness of a year of life lost vs. a year of severe depression.
Scientific research capacity in sub-Saharan Africa.
The landscape of climate change philanthropy.
Energy frontier growth (this report explores several of the key considerations for quantifying the potential economic growth benefits of clean energy R&D).
Funding gaps and bottlenecks to the deployment of carbon capture, utilization, and storage technologies.
A literature review on damage functions of integrated assessment models in climate change.
A confidential project that we won't give further details on.
Detailing the World Health Organization's prequalification process for medicines, vaccines, diagnostics, and vector control, as well as the potential impact of additional funding in this area.
Describing the World Health Organization's Essential Medicines List and the potential impact of additional funding in this area.
Whether Open Phil should make a major set of grants to establish better weather forecasting data availability in low- and middle-income countries (LMICs).
Further examination of hypertension, including its scale and plausible areas where a philanthropist could make a difference.
And for AI Governance and Strategy, some examples include the following:
Ongoing projects include the following: (Note: this list isn't comprehensive, and some of these will soon result in public outputs.)
Developing what's intended to be a comprehensive database of AI policy proposals that could be implemented by the US government in the near or medium term. This database is intended to capture information on these proposals' expected impacts, their levels of consensus within longtermist circles, and how they could be implemented.
Planning another Long-term AI Strategy Retreat for 2023, and potentially some smaller AI strategy events.
Thinking about what the leadup to transformative AI will look like, and how to generate economic and policy implications from technical people's expectations of AI capabilities growth.
Mentoring AI strategy projects by promising people outside of our team who are interested in testing and building their fit for AI governance and strategy work.
Preparing a report on the character of AI diffusion: how fast and by what mechanisms AI technologies spread, what strategic implications that has (e.g. for AI race dynamics), and what interventions could be pursued to influence diffusion.
Surveying experts on intermediate goals for AI governance.
Investigating the tractability of bringing about international agreements to promote AI safety and the best means of doing so, focusing particularly on agreements that include both the US and China.
Investigating possible mechanisms for monitoring and restricting possession or use of AI-relevant chips.
Assessing the potential value of an AI safety bounty program, which would reward people who identify safety issues in a specified AI system.
Writing a report on "Defense in Depth against Catastrophic AI Incidents," which makes a case for mainstream corporate and policy actors to care about safety- and security-related AI risks, and lays out a "toolkit" of 15-20 interventions that they can use to improve the design, security, and governance of high-stakes AI systems.
Experimenting with using expert networks for EA-aligned research.
Trying to create or improve pipelines for getting mainstream think tanks to do valuable longtermism-aligned research projects, e.g. by identifying and scoping fitting research projects.
Thanks, and sorry for not having checked the appendix!
It looks like it would be quite valuable to publish that research, even if just as posts containing a summary and a link to the relevant report, to save time. This would not be possible for the reports containing information hazards, but I hope few fall into that category.
Thanks for your engagement!