The idea of running an event in particular seems misguided. Conventions come after conversations. Real progress toward understanding, or conveying understanding, does not happen through speakers going On Stage at big events. If speakers On Stage ever say anything sensible, it’s because an edifice of knowledge was built in the background out of people having real, engaged, and constructive arguments with each other, in private where constructive conversations can actually happen, and the speaker On Stage is quoting from that edifice.
(This is also true of journal publications about anything strategic-ish—most journal publications about AI alignment come from the void and are shouting into the void, neither aware of past work nor feeling obliged to engage with any criticism. Lesser (or greater) versions of this phenomenon occur in many fields; part of where the great replication crisis comes from is that people can go on citing refuted studies and nothing embarrassing happens to them, because god forbid there be a real comments section or an email reply that goes out to the whole mailing list.)
If there’s something to be gained from having national-security higher-ups understanding the AGI alignment strategic landscape, or from having alignment people understand the national security landscape, then put Nate Soares in a room with somebody in national security who has a computer science background, and let them have a real conversation. Until that real progress has already been made in in-person conversations happening in the background where people are actually trying to say sensible things and justify their reasoning to one another, having a Big Event with people On Stage is just a giant opportunity for a bunch of people new to the problem to spout out whatever errors they thought up in the first five seconds of thinking, neither aware of past work nor expecting to engage with detailed criticism, words coming from the void and falling into the void. This seems net counterproductive.
These seem like reasonable points in isolation, but I’m not sure they answer the first question as actually posed. In particular:
Why would it necessarily be ‘a bunch of people new to the problem [spouting] whatever errors they’ve thought up in the first five seconds of thinking’? Jay’s spectrum of suggestions was wide and included a video or podcast. With that kind of thing there would appear to be ample scope either to have someone experienced with the problem do the presenting, or to have the material reviewed by people with relevant expertise before it is released. A Big Event On Stage wasn’t the only thing on offer.
The actual question in the post was: “I have little doubt that if I reached out to two random poverty or animal-focused EAs with the pitch ‘I can get a bunch of respected journalists, academics, and policymakers to hear the exact perspective you want me to share with them on our trusted/prestigious platform,’ they would be pretty psyched about that (as I think they should be). So what’s so different about AI safety?” I don’t really know what your answer to this is: is AI safety particularly vulnerable to the downsides you described, and if so, why? Or are the other areas of EA making a mistake?
“If there’s something to be gained from having national-security higher-ups understanding the AGI alignment strategic landscape, or from having alignment people understand the national security landscape,...” I’m pretty surprised that this sentence starts with ‘if there’s’ rather than ‘while there is certainly’, so I want to check: is that deliberate? That is, are you actually sceptical about whether national security higher-ups have anything to offer?
If you actually don’t think there’s anything to be gained from cooperation between AGI alignment people and national security people, the weakness of your other objections makes more sense, because they aren’t really your true rejection; your true rejection is that there’s no upside and some potential downsides.
Are these recommendations based on sound empirical data (e.g. a survey of AI researchers who’ve come to realize AI risk is a thing, asking them what they were exposed to and what they found persuasive), or just guessing/personal observation?
If persuasive speaking is an ineffective way of spreading concern for AI risk, then we live in one of two worlds.
In the first world, the one you seem to imply we live in, persuasive speaking is ineffective for most things, and in particular it’s ineffective for AI risk. In this world, I’d expect training in persuasive speaking (whether at a 21st century law school or an academy in Ancient Greece) to be largely a waste of time. I would be surprised if this is true. The only data I could find offhand related to the question is from Robin Hanson: “The initially disfavored side [in a debate] almost always gains a lot… my guess is that hearing half of a long hi-profile argument time devoted to something makes it seem more equally plausible.”
In the second world, public speaking is effective persuasion in at least some cases, but there’s something about this particular case that makes public speaking a bad fit. This seems more plausible, but it could also be a case of ineffective speakers or an ineffective presentation. It’s also important to have good measurement methods: for example, if most post-presentation questions offer various objections, it’s still possible that your presentation was persuasive to the majority of the audience.
I’m not saying all this because I think events are a particularly promising way to persuade people here. Rather, I think this issue is important enough that our actions should be determined by data whenever possible. (It might be worthwhile to do that survey if it hasn’t been done already.)
I also think the burden of proof for a strategy focused primarily on personal conversations should be really high. Personal conversations are about the least scalable method of persuasion. Satvik Beri recommends that businesses do sales first to figure out how to overcome common objections, then use a sales pitch that’s known to be effective as marketing copy. A similar strategy could work here: take notes on common objections & the best ways to refute them after personal conversations, then use that knowledge to inform the creation of scalable persuasive content like books/talks/blog posts.
“...having a Big Event with people On Stage is just a giant opportunity for a bunch of people new to the problem to spout out whatever errors they thought up in the first five seconds of thinking, neither aware of past work nor expecting to engage with detailed criticism...”
I had to go back and double-check that this comment was written before Asilomar 2017. It describes some of the talks very well.
One way to have interesting conversations is to have them over dinner, between the public talks at a conference. The most interesting thing about conferences is the informal connection between people during breaks and in the evenings. A conference is just an excuse to gather the right people together and frame a topic. So such a conference may help to connect national security people and AI safety people.
But my feeling from previous conversations is that the current wisdom among AI people is that government people are unable to understand their complex problems, and also are not players in the game of AI creation; only hackers and corporations are. I don’t think that is the right approach.
I have heard about retreats and closed conferences/workshops to get people together; I would imagine something like that would be better from the point of view that Eliezer is coming from.
In order for people to have useful conversations where genuine reasoning and thinking is done, they have to actually meet each other.