Enough happened to write a small update about the Existential Risk Observatory.
First, we made progress in our core business: informing the public debate. We published two more op-eds (in Dutch, one with a co-author from FLI) in a reputable, large newspaper. Our pieces warn against existential risk, especially from AGI, and propose low-hanging-fruit measures the Dutch government could take to reduce risk (e.g. extra AI safety research).
A change from the previous update is that serious, leading journalists are becoming interested in the topic. One leading columnist has already written a column about AI existential risk in a leading newspaper. Another journalist is planning a major article about it. The same journalist proposed holding a debate about AI existential risk at the leading debate center, which would be well positioned to influence yet others, and he offered to use his network for this purpose. This is definitely not yet a fully-fledged, informed societal debate, but it does update our expectations in relevant ways:
Op-eds translating into broader media attention is realistic.
That attention is generally constructive rather than derogatory.
Most of the informing takes place in a social, personal context.
In our experience, the process works by informing leaders of the societal debate, who then inform others. For example, we organized an existential risk drinks event where thought leaders, EAs, and journalists could talk to each other, which worked very well. Key figures should hear accurate existential risk information from different sides. Social proof is key. Being honest, sincere, and coherent, and trying to receive as well as send, goes a long way, too.
Another update is that we will receive funding from the SFF and are in serious discussions with two other funds. We are very happy that this shows our approach, reducing existential risk by informing the public debate, has backing in the existential risk community. We are still resource-constrained, but also massively manpower- and management-constrained. Our vision is a world where everyone is informed about existential risk. We cannot achieve this vision alone and would like other institutes (new and existing) to join us in the communication effort. That we have received funding for informing the societal debate is evidence that others can, too. We are happy to share what we are doing, and how others could do the same, at talks, for example for local EA groups or at events.
Our targets for this year remain the same:
Publish at least three articles about existential risk in leading media in the Netherlands.
Publish at least three articles about existential risk in leading media in the US.
Receive funding for stability and future upscaling.
We will start working on next year’s targets in Q4.