My best guess: Rethink Priorities, specifically the longtermism department. These article titles sound very close to what I’m imagining:
Issues with futarchy
Key characteristics for evaluating future global governance institutions
How does forecast quantity impact forecast quality on Metaculus?
An analysis of Metaculus predictions of future EA resources, 2025 and 2030
Disentangling “improving institutional decision-making”
Towards a longtermist framework for evaluating democracy-related interventions
An examination of Metaculus’ resolved AI predictions and their implications for AI timelines
Types of specification problems in forecasting
Humanities research ideas for longtermists
Data on forecasting accuracy across different time horizons and levels of forecaster experience
Intervention profile: ballot initiatives
Will the Treaty on the Prohibition of Nuclear Weapons affect nuclear deproliferation through legal channels?
Deliberation may improve decision-making
Would US and Russian nuclear forces survive a first strike?
My second-best guess is to fund forecasting work in general, such as Metaculus, but especially the innovative ideas coming out of QURI, which strike me as one specific puzzle piece that is very likely to be important.
Another second-best guess is that the open-source game theory work of the Center on Long-Term Risk will become important for averting conflict escalations from highly automated forms of governance.
But I think it’s only a very narrow slice of all possible futures in which AI is powerful enough for open-source game theory to make the right predictions, and yet humans are still around to a sufficient degree that the funding opportunity is interesting for people who currently want to support benevolent global governance but are not (already) interested in supporting AI safety.
I’ve been a fan of these organizations for a long time, so I suspect the availability heuristic is at work here, and that there are many more excellent funding opportunities out there that I haven’t heard of.
The Simon Institute for Longterm Governance (SI) is developing the capacity to do a) more practical research on many of the issues you’re interested in and b) the kind of direct engagement necessary to play a role in international affairs. For now, the focus is on the UN and related institutions, but if SI’s growth proves sustainable, we think it would be sensible to expand to EU policy engagement.
You can read more in our 2021 review and 2022 plans. We also have significant room for more funding, as we only started fundraising again last month.
Awesome, thanks! I’ll have a look at the documents.