Is Britain prepared for the challenges ahead? We face significant risks, from climate change and pandemics to digital transformation and geopolitical tensions. We need social-democratic answers to create a fair and resilient future.
Our vision: A leading role for the UK
Many long-term issues have an important political dimension in which the UK can play a leading role. Building on the work of previous Labour governments, we see a future where the UK plays a larger role in areas such as reducing international tensions and becoming a world leader in green technology.
EffiSciences is a student collective founded at the Écoles Normales Supérieures (ENS), working for research that is more engaged with the problems of our world. [translated from French]
Hi, we are the Confido Institute and we believe in a world where decision makers (even outside the EA-rationalist bubble) can make important decisions with less overconfidence and more awareness of the uncertainties involved. We believe that almost all strategic decision-makers (or their advisors) can understand and use forecasting, quantified uncertainty and public forecasting platforms as valuable resources for making better and more informed decisions.
We design tools, workshops and materials to support this mission. This is the first in a series of EA Forum posts. We will tell you more about our mission and our other projects in future articles.
In this post, we are pleased to announce that we have just released the Confido app, a web-based tool for tracking and sharing probabilistic predictions and estimates. You can use it in strategic decision-making when you want probabilistic estimates on a topic from different stakeholders, in meetings to avoid anchoring, to organize forecasting tournaments, or in calibration workshops and lectures. We offer very high data privacy, so it is also used in government settings. See our demo or request your Confido workspace for free.
The current version of Confido is already used by several organizations, including the Dutch government, several policy think tanks and EA organizations.
Confido is under active development and there is a lot more to come. We’d love to hear your feedback and feature requests. For news, follow us on Twitter, Facebook, or LinkedIn, or collaborate with us on Discord. We are also looking for funding. [emphasis added]
We are announcing a new organization called Epistea. Epistea supports projects in the space of existential security, epistemics, rationality, and effective altruism. Some projects we initiate and run ourselves, and some projects we support by providing infrastructure, know-how, staff, operations, or fiscal sponsorship.
Our current projects are FIXED POINT, Prague Fall Season, and the Epistea Residency Program. We support ACS (Alignment of Complex Systems Research Group), PIBBSS (Principles of Intelligent Behavior in Biological and Social Systems), and HAAISS (Human Aligned AI Summer School).
SaferAI is developing technology to audit and mitigate potential harms from general-purpose AI systems such as large language models.
TL;DR: The European Network for AI Safety is a central point for connecting researchers and community organizers in Europe with opportunities and events happening in their vicinity. Sign up here to become a member of the network, and join our launch event on Wednesday, April 5th, from 19:00 to 20:00 CET!
Our mission is to conduct research on and prioritize global catastrophic risks in the Spanish-speaking countries of the world.
There is a growing interest in global catastrophic risk (GCR) research in English-speaking regions, yet this area remains neglected elsewhere. We want to address this deficit by identifying initiatives to enhance the public management of GCR in Spanish-speaking countries. In the short term, we will write reports about the initiatives we consider most promising. [Quote from Introducing the new Riesgos Catastróficos Globales team]
The International Center for Future Generations is a European think-and-do-tank for improving societal resilience in relation to exponential technologies and existential risks.
an Australia-based organisation dedicated to developing high-quality and detailed policy proposals for the greatest challenges of the 21st century. [source]
We are an incubator for new governance models for transformative technology.
Our goal: To overcome the transformative technology trilemma.
Existing tech governance approaches fall prey to the transformative technology trilemma. They assume significant trade-offs between progress, participation, and safety.
Market-forward builders tend to sacrifice safety for progress; risk-averse technocrats tend to sacrifice participation for safety; participation-centered democrats tend to sacrifice progress for participation.
Collective flourishing requires all three. We need collective intelligence (CI) R&D so we can simultaneously advance technological capabilities, prevent disproportionate risks, and enable individual and collective self-determination.
Cavendish Labs is a 501(c)(3) nonprofit research organization dedicated to solving the most important and neglected scientific problems of our age.
We’re founding a research community in Cavendish, Vermont that’s focused primarily on AI safety and pandemic prevention, although we’re interested in all avenues of effective research.
[...] our team is pursuing a two-pronged strategy. One is foundational, aimed at filling in the gaps in the science of forecasting that represent critical barriers to some of the most important uses of forecasting—like how to handle low probability events, long-run and unobservable outcomes, or complex topics that cannot be captured in a single forecast. The other prong is translational, focused on adapting forecasting methods to practical purposes: increasing the decision-relevance of questions, using forecasting to map important disagreements, and identifying the contexts in which forecasting will be most useful.
[...] Our core team consists of Phil Tetlock, Michael Page, Josh Rosenberg, Ezra Karger, Tegan McCaslin, and Zachary Jacobs. We also work with various contractors and external collaborators in the forecasting space.
Our purpose is to spread Effective Altruist, longtermist, and rationalist values and ideas as much as possible to the general public by leveraging new media. We aim to reach our goal through the creation of high-quality material posted on an ecosystem of YouTube channels, profiles on social media platforms, podcasts, and SoT’s website.
Some orgs that should maybe be added (I’d be keen for someone to fill in the form to add them, including relevant info on them):
Aligned AI
See https://forum.effectivealtruism.org/posts/emKDqNjyE2h22MJ2T/we-re-aligned-ai-we-re-aiming-to-align-ai
Conjecture
See https://forum.effectivealtruism.org/posts/m5EBkgivRypdqm3zi/we-are-conjecture-a-new-alignment-research-startup
ML Progress research group
See https://www.lesswrong.com/s/T9pBzinPXYB3mxSGi
Cohere?
See https://forum.effectivealtruism.org/posts/DDDyTvuZxoKStm92M/ai-safety-needs-great-engineers (but see also my comment there)
Czech Priorities
See https://ceskepriority.cz/o-nas/
Sage
(Not sure if there are public writings on them yet)
Arb
See https://forum.effectivealtruism.org/users/arb
Samotsvety
See https://forum.effectivealtruism.org/posts/KRFXjCqqfGQAYirm5/samotsvety-nuclear-risk-forecasts-march-2022
Epoch
Labour for the Long Term
EffiSciences
Palisade Research
“At Palisade, our mission is to help humanity find the safest possible routes to powerful AI systems aligned with human values. Our current approach is to research offensive AI capabilities to better understand and communicate the threats posed by agentic AI systems.”
Jeffrey Ladish is the Executive Director.
Admond
“Admond is an independent Danish think tank that works to promote the safe and beneficial development of artificial intelligence.”
“Artificial intelligence is going to change Denmark. Our mission is to ensure that this change happens safely and for the benefit of our democracy.”
Senter for Langsiktig Politikk
“A politically independent organisation aimed at creating a better and safer future”
A think tank based in Norway.
Confido Institute
Epistea
Transformative Futures Institute
Led by Ross Gruetzemacher
SaferAI
Orthogonal: A new agent foundations alignment organization
Apart Research
Also the European Network for AI Safety (ENAIS)
Riesgos Catastróficos Globales
International Center for Future Generations
As of today, their website lists their priorities as:
Climate crisis
Technology [including AI] and democracy
Biosecurity
Harvard AI Safety Team (HAIST), MIT AI Alignment (MAIA), and Cambridge Boston Alignment Initiative (CBAI)
These are three distinct but somewhat overlapping field-building initiatives. More info at Update on Harvard AI Safety Team and MIT AI Alignment and at the things that post links to.
Policy Foundry
The Collective Intelligence Project
Also Cavendish Labs:
Also the Forecasting Research Institute
Also School of Thinking
Also Research in Effective Altruism and Political Science (REAPS)
Also AFTER (Action Fund for Technology and Emerging Risk)
Also Future Academy (though maybe that’s a project of EA Sweden rather than a standalone org?).
Also anything in Alignment Org Cheat Sheet that’s not in here. And maybe adding that post’s 1-sentence descriptions to the info this database has on each org listed in that post.
Also fp21 and maybe Humanity Forward.
(Reminder: This is a database of orgs relevant to longtermist/x-risk work, and includes some orgs that are not part of the longtermist/x-risk-reduction community, don’t associate with those labels, and/or don’t focus specifically on those issues.)
Also Alvea and Nucleic Acid Observatory
Also Apollo Fellowship, Atlas Fellowship, Condor Camp, Pathfinder, and Successif
Also Apollo Academic Surveys
Also AI Safety Field Building Hub and Center for AI Safety
Also Space Futures Initiative and Center for Space Governance
Also EA Engineers
Also Fund for Alignment Research
Also Institute for Progress
Also Encultured AI
Also Pour Demain
To the best of my knowledge, Samotsvety is a group of forecasters, not an organization (although some of its members have recently launched or will soon launch forecasting-related orgs).