Thanks for writing this. I appreciate the effort and sentiment. My quick and unpolished thoughts are below. I wrote this very quickly, so feel free to critique.
The TL;DR: I think this is good, with some caveats, but also that we need more work on our ecosystem to be able to do outreach (and everything else) better.
I think we need a better AI Safety movement to properly do and benefit from outreach work. Otherwise, this and similar posts for outreach/action are somewhat like a call to arms without the strategy, weapons and logistics structure needed to support them.
Doing the things you mention is probably better than doing nothing (some of these more than others), but it’s far from what is possible in terms of risk minimisation and expected impact.
What do we need for the AI Safety movement to properly do and benefit from outreach work?
I think that doing effective collective outreach will require us to be more centralised and coordinated.
Right now, we have people like you who seem to believe that we need to act urgently to engage people and raise awareness, in opposition to other influential people, like Rohin Shah and Oliver Habryka, who seem to oppose movement building (though this may just be the recruitment element).
The polarisation and uncertainty promote inaction.
I therefore don’t think that we will get anything close to maximally effective awareness raising about AI risk until we have a related strategy and operational plan that has enough support from key stakeholders or is led by one key stakeholder (e.g., Will/Holden/Paul) and actioned by those who trust that person’s takes.
Here are my related (low confidence) intuitions (based mainly on this and related conversations) for what to do next:
We need to find/fund/choose some/more people/process to drive overall strategy and operation for the mainstream AI Safety community. For instance, we could just have some sort of survey/voting system to capture community preferences/elect someone. I don’t know what makes sense now, but it’s worth thinking about.
When we know what the community/representatives see as the strategy and supporting operation, we need someone/some process to figure out who is responsible for executing the overall strategy and parts of the operations and communicating them to relevant people. We need behaviour level statements for ‘who needs to do what differently’.
When we know ‘who needs to do what differently’ we need to determine and address the blockers and enablers to scale and sustain the strategy and operation (e.g., we likely need researchers to find what communication works with different audiences; communicators to write things and to connect with, and win over, influential/powerful people; recruiters to recruit the human resources; developers and designers to make persuasive digital media; managers to manage these groups; entrepreneurs to start and scale the project; and funders to support the whole thing, etc.).
It’s a big ask, but it might be our best shot.
Why hasn’t somebody done this already?
As I see it, the main reason for all of the above is a lack of shared language and understanding, which emerged because of how the AI safety community developed.
Movement building/field building mean different things to different people, and no-one knows what the community collectively supports or opposes in this regard. This uncertainty reduces attempts to do anything on behalf of the community, and the chances of success if anyone tries.
Perhaps because of this, no-one who could curate preferences and set a direction (e.g., Will/Holden/Paul) feels confident to do so.
It’s potentially a chicken-and-egg or coincidence-of-wants problem: most people would like someone like Holden to drive the agenda, but he doesn’t know that, or thinks someone else would be better suited (and that person doesn’t know either). Or the people who could lead know that the community doesn’t want anyone to lead it in this way, but haven’t communicated this, so I don’t know it yet.
What happens if we keep going as we are?
I think that the EA community (with some exceptions) will mostly continue to function like a decentralised group of activists, posting conflicting opinions in different forums and social media channels, while doing high quality, but small scale, AI safety governance, technical and strategy work that is mostly known and respected in the communities it is produced in.
Various other more centralised groups, with leaders like Sam Altman, Tristan Harris, Timnit Gebru, etc., will drive the conversations and changes. That might be for the best, but I suspect not.
Urgent, unplanned communication by EAs acting in isolation poses many risks. If lots of people who don’t know what works for changing people’s minds and behaviours post lots of things about how they feel, this could be bad.
These people could very well end up in isolated communities (e.g., just like many vegan activists I see who are mainly just reaching vegan followers on social media).
They could poison the well and make people associate AI safety with poorly informed and overconfident pessimists.
If people engage in civil disobedience, we could end up being feared and hated, and subsequently excluded from consideration and conversation.
Our actions could create abiding associations that will damage later attempts at persuasion by more persuasive sources.
This could be the unilateralist’s curse brought to life.
Other thoughts/suggestions
Test your communication at small scale (e.g., with a small sample of people on Mechanical Turk, or with friends) before you do large-scale outreach. A minimal sketch of what such a test might look like is below.
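To make that concrete, here is a minimal sketch (in Python) of comparing two pilot messages. Everything in it is illustrative: the message names, the 100-person samples, and the persuasion counts are hypothetical, and the 0.05 threshold is just the conventional default rather than a considered recommendation.
```python
# Minimal sketch: compare two pilot outreach messages before scaling.
# Hypothetical setup: each message was shown to 100 people (e.g., a small
# Mechanical Turk sample), and we recorded how many reported increased
# concern about AI risk afterwards.
from scipy.stats import chi2_contingency

pilot_results = {
    "message_a": [34, 66],  # [persuaded, not persuaded], hypothetical counts
    "message_b": [52, 48],
}

# Chi-squared test of independence: do persuasion rates differ by message?
table = [pilot_results["message_a"], pilot_results["message_b"]]
chi2, p_value, dof, expected = chi2_contingency(table)

print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
if p_value < 0.05:  # conventional threshold, not a considered recommendation
    print("Persuasion rates differ; consider favouring the stronger message.")
else:
    print("No clear difference yet; gather more pilot data before scaling up.")
```
The specific test matters less than the habit: even a crude comparison like this, run on a small sample, tells you more about which framing to scale than intuition does.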
Think about taking a step back to prioritise between behaviours and rule out the ones with more downside risk (so it’s better to write letters to representatives than to post to large audiences on social media if you’re unsure what is persuasive).
Don’t do civil disobedience unless you have read the literature about when and where it works (and maybe just don’t do it—that could backfire badly).
Think about the AI Safety ecosystem and indirect ways to get more of what you want by influencing/aiding people or processes within it:
For instance, I’d like to see progress on questions like:
- What are the main arguments for and against doing certain things (e.g., the AI pause/public awareness raising), and what is the expert consensus on whether a strategy/action would be a good idea (e.g., what do superforecasters/AI orgs recommend)?
- When we have evidence for a strategy/action, then: Who needs to do what differently? Who do we need to communicate to, and what do we know is persuasive to them/how can we test that?
- Which current AI safety projects (e.g., technical, strategy, movement building) are worth prioritising for the allocation of resources (funding, time, advocacy, etc.)? What do experts think?
- What voices/messages can we amplify when we communicate? It’s much easier to share something good from an expert than to write it.
- Who could work with others for mutual benefit but doesn’t realise it yet?
I am thinking about, and doing, a little of some of these things, but have other obligations for 3-6 months and some uncertainty about whether I am well suited to do them.
These all seem like good suggestions, if we still had years. But what if we really do only have months (to get a global AGI moratorium in place)? In some sense the “fog of war” may already be upon us (there are already too many further things for me to read and synthesise, and analysis paralysis seems like a great path toward death). How did action on Covid unfold? Did all these kinds of things happen first before we got to lockdowns?
vegan activists
This is quite different. It’s about personal survival of each and every person on Earth, and their families. (No concern for other people or animals is needed!)
This could be the unilateralist’s curse brought to life.
Could it possibly be worse than what’s already happened with DeepMind, OpenAI and Anthropic (and now Musk’s X.ai)?
have other obligations for 3-6 months
Is there any way you can get out of those other obligations? Time really is of the essence here. Things are already moving fast, whether EA/the AI Safety community is coordinated and onboard or not.
Peter—good post; these all seem reasonable as comments.
However, let me offer a counter-point, based on my pretty active engagement on Twitter about AI X-risk over the last few weeks: it’s often very hard to predict which public outreach strategies, messages, memes, and points will resonate with the public, until we try them out. I’ve often been very surprised about which ideas really get traction, and which don’t. I’ve been surprised that meme accounts such as @AISafetyMemes have been pretty influential. I’ve also been amazed at how (unwittingly) effective Yann LeCun’s recklessly anti-safety tweets have been at making people wary of the AI industry and its hubris.
This unpredictability of public responses might seriously limit the benefits of carefully planned, centrally organized activism about AI risk. It might be best just to encourage everybody who’s interested to try out some public arguments, get feedback, pay attention to what works, identify common misunderstandings and pain points, share tactics with like-minded others, and iterate.
Also, lack of formal central organization limits many of the reputational risks of social media activism. If I say something embarrassing or stupid as my Twitter persona @primalpoly, that’s just a reflection on that persona (and to some extent, me), not on any formal organization. Whereas if I were the grand high vice-invigilator (or whatever) in some AI safety group, my bad tweets could tarnish the whole safety group.
My hunch is that a fast, agile, grassroots, decentralized campaign of raising AI X-risk awareness could be much more effective than the kind of carefully-constructed, clearly-missioned, reputationally-paranoid organizations that EAs have traditionally favored.