Some people are promoting social media awareness of x-risks, for example that Kurzgesagt video, which was funded by Open Philanthropy[1]. There’s also Doom Debates, Robert Miles’s YouTube channel, and some others. There are some media projects on Manifund too, for example this one.
If your question is "why aren't people doing more of that sort of thing?", then yeah, that's a good question. If I were the AI Safety Funding Czar, I would allocate a bigger budget to media projects (both social media and traditional media).
There are two arguments against giving marginal funding to media projects that I actually believe:
My guess is that public protests are more cost-effective right now, because (a) they're more neglected, (b) they naturally generate media attention, and perhaps (c) they're more dramatic, which leads people to take the AI x-risk problem more seriously.
I also expect some kinds of policy work to be more cost-effective. There's already a lot of policy research happening, but I think we need more (a) people talking honestly to policymakers about x-risk and (b) people writing legislation targeted at reducing x-risk. Policy has the advantage that you don't need to change as many minds to have a large impact, but it has the disadvantage that those minds are particularly hard to change: a huge chunk of a policymaker's job is listening to people say "please pay attention to my issue", so you have a lot of competition.
There are other arguments that I don't believe, although I expect some people have arguments that have never even occurred to me. The main arguments I can think of that I don't find persuasive are:
It’s hopeless to try to make AI safer via public opinion / the people developing AI don’t care about public opinion.
We should mainly fund technical research instead, e.g. because the technical problems in AI safety are more tractable.
Public-facing messages will inevitably be misunderstood and distorted and we will end up in a worse place than where we started.
If media projects succeed, then we will get regulations that slow down AI development, but we need to go as fast as possible to usher in the glorious transhumanist future or to beat China or whatever.
[1] I don't know for sure that that specific video was part of the Open Philanthropy grant, but I'm guessing it was, based on its content.