Otto Barten here, director of the Existential Risk Observatory.
We reduce AI existential risk by informing the public debate. Concretely, we do media work, organize events, conduct research, and give policy advice.
According to our measurements, awareness of AI existential risk among the US public is currently around 15%. Low problem awareness is a major reason why risk-reducing regulation, such as SB-1047 or more ambitious federal and global proposals, does not get passed. Why solve a problem one does not see in the first place?
Therefore, we do media work to increase awareness of AI existential risk and to propose helpful regulation. Today, we published our fourth piece in TIME Magazine, arguing that AI poses an existential risk and proposing the Conditional AI Safety Treaty. According to survey-based measurements (n=50 per media item), our ‘conversion rate’, the share of readers who newly connect AI to human extinction after reading our articles, is between 34% and 50%, and about half of that effect persists over time. We have published four TIME pieces and around 20 other media items over the last two years. Although we cannot cleanly separate media work from our other activities, we estimate that $35k should get a funder roughly two leading media pieces, plus about ten supporting ones.
In addition to media work, we organize events. Our track record includes four debates featuring leading existential risk voices such as Yoshua Bengio, Stuart Russell, Max Tegmark, and Jaan Tallinn on the one hand, and journalists from outlets such as TIME and The Economist, as well as MPs, on the other. Our events aim to inform leading voices in the societal debate and policymakers about existential risk, and to give experts the chance to propose helpful policy. We organized events ahead of the AI Safety Summits at Bletchley Park and in Korea (remote), and will do so again ahead of the Paris summit. These events have helped, and will continue to help, shape the summits’ narratives towards concern for existential risk. We can organize one event for around $20k, including venue costs, travel and hotel costs, and organization hours.
We also do policy research. In the coming year, we will focus on what exactly the optimal Conditional AI Safety Treaty should look like, and on how to get it implemented. We are uniquely positioned not only to do leading research, but also to communicate it directly to a large audience, including MPs and leading journalists. Working together with other institutes, we plan to write a paper on the treaty’s optimal shape. We can produce such a paper for around $18k.
As an organization, we are heavily funding-constrained. We have been supported by established funders such as SFF, LTFF, and ICFG in the past, but only with relatively modest amounts; our current runway is about five months. Additional funding would mostly enable us to keep doing what we are doing (and to get even better at it!): media work, organizing events, and doing research. Within these three focus areas, we are also open to earmarked funding, or to additional funding to scale up our work.
To donate, please contact us by email. Your support is much appreciated!