Given that they’ve made a public Manifund application, it seems fine to share that there has been quite a lot of discussion about this project on the LTFF internally. I don’t think we are in a great place to share our impressions right now, but if Connor would like me to, I’d be happy to share some of my takes in a personal capacity.
Hey! Thanks for the comments. I’d be super happy to hear your personal takes, Caleb!
Some quick takes in a personal capacity:
I agree that a good documentary about AI risk could be very valuable. I’m excited about broad AI risk outreach and few others seem to be stepping up. The proposal seems ambitious and exciting.
I suspect that a misleading documentary would be mildly net-negative, and it’s easy to be misleading. So far, a significant fraction of public communications from the AI safety community has been fairly misleading (definitely not all—there is some great work out there as well).
In particular, equivocating between harms like deepfakes and GCRs seems pretty bad. I think it’s fine to mention non-catastrophic harms, but often, the benefits of AI systems seem likely to dwarf them. More cooperative (and, in my view, effective) discourse should try to mention the upsides and transparently point to the scale of different harms.
In the past, team members have worked on, or at least worked in the same organisation as, comms efforts that seemed low integrity and fairly net-negative to me (e.g., some of their work on deepfakes, and adversarial mobile billboards around the UK AI Safety Summit). I don’t know whether these specific team members were involved in those efforts.
The team seems very agentic and more likely to succeed than most “field-building” AIS teams.
Their plan seems pretty good to me (though I am not an expert in the area). I’m pretty into people just trying things. Seems like there are too few similar efforts, and like we could regret not making more stuff like this happen, particularly if your timelines are short.
I’m a bit torn: some donors should be very excited about this, and others should be much more on the fence or think it’s somewhat net-negative. Overall, I think it’s probably pretty promising.
Thanks Caleb, very useful. @ConnorA I’m interested in your thoughts re how to balance comms on catastrophic/existential risks versus things like deepfakes. (I don’t know about the particular past efforts Caleb mentioned, and I think I am more open to comms on deepfakes being useful for developing a broader coalition, even though deepfakes are a tiny fraction of what I care about wrt AI.)
To be clear, I’m open to building broad coalitions and think that a good documentary could/would feature content on low-stakes risks, but I believe people should be transparent about their motivations and avoid conflating non-GCR stuff with GCR stuff.
Thanks Caleb and Oscar!
I’ll write up my full thoughts this weekend. But regarding your worry that our doc will end up conflating deepfakes and GCRs: we don’t plan to do this, and we are very clear that they are different.
Our model of the non-technical public is that they feel they are at higher risk of job loss than of the world ending. So our film intends to clearly explain the potential risks to their jobs, and also to show how the same AI that might automate their jobs could, for example, be used to create bioweapons for terrorists who may seek to deploy them on the world. We do not (and will not) conflate the two, but both will be included in the film.
To Oscar: thanks for the comment! Do get in touch if you’d like to help out or are thinking of donating.
To Caleb: we really appreciate your comments here and think they’re fair. But although we worked on comms with our former employers, we have different views and ways of communicating than they do. (I still think Control AI and Conjecture did and do good comms work on the whole, though.) I think if we grabbed a coffee or a Zoom call, we’d probably see we’re closer than you think.
Have a good day!