A communications specialist who could think of 100 different maximally-effective ways to explain AI safety to a general audience, many of them in less than two minutes and some in a couple sentences.
I’m about to start as Head of Communications at CEA, and think this would be a very useful brainstorming exercise — thanks for the suggestion!
Have you seen the results of the AI Safety Arguments contest? It’s the best resource I know of for this (and more time-efficient than most), although it would be great if someone could set up an even more time-efficient, general-purpose rhetoric resource for persuasion assistance.
I had not seen this, thanks for sharing!
Wow! If you’d like to share drafts of things like that in a place that I could see them, I’m interested!
I believe I could do this. My background is just writing, argument, and community-building, I guess.
An idea that was floated recently was an interactive site that asks the user a few questions about themselves and their worldview, then tailors an introduction to them.
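To make that concrete, here is a minimal sketch of how such a site could map answers to an introduction. Everything in it (the question keys, the answer buckets, the intro blurbs) is a hypothetical placeholder, not a description of anything that exists:

```typescript
// Hypothetical sketch: route a visitor to one of a few intro variants
// based on a couple of self-reported answers. All question keys, answer
// options, and intro blurbs below are made-up placeholders.

type Answers = {
  background: "technical" | "policy" | "other";
  priorExposure: "none" | "some";
};

const intros: Record<string, string> = {
  "technical:none": "Intro framing AI safety as an open engineering problem.",
  "technical:some": "Intro pointing at current alignment research agendas.",
  "policy:none": "Intro framing AI safety as a governance and risk-management issue.",
  "policy:some": "Intro on concrete policy levers and institutions.",
  "other:none": "Two-minute general-audience intro.",
  "other:some": "Intro that addresses common objections head-on.",
};

// Pick the intro variant matching the visitor's answers.
function pickIntro(a: Answers): string {
  return intros[`${a.background}:${a.priorExposure}`];
}

// Example: a visitor with a policy background and no prior exposure.
console.log(pickIntro({ background: "policy", priorExposure: "none" }));
```

The lookup itself is trivial; the real work would be writing the questions and the intro variants.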
I’m not sure how strong the need actually is, though. I get the impression that EA is such a simple concept (reasoned, evidence-based moral dialog; earnest, consequentialist optimization of our shared values) that most misunderstandings of what EA is are deliberate misunderstandings, and that better explanations won’t actually help much. It’s as if people don’t want to believe that EA is what it claims to be.
It’s been a long time since I was outside of the rationality community, but I definitely remember having some sort of negative feeling about the suggestion that I could get better at foundational capacities like reasoning or, in EA’s case, knowing right from wrong.
I guess a solution there is to convince the reader that rationality/practical ethics isn’t just a tool for showing off to others (which is zero-sum, so we wouldn’t collectively benefit from improvements in the state of the art), and that being trained in it would make their life better in some way. I don’t think LW ever actually developed the ability to sell itself as self-help (I think it just became a very good analytic philosophy school). I think that’s where the work needs to be done.
What bad things will happen to you if you reject a VNM axiom or tell yourself pleasant lies? What choking cloud of regret will descend around you if you aren’t doing good effectively?
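To give the first question one concrete answer: reject the transitivity axiom, for instance, and you become money-pumpable. This is just the standard textbook illustration, with arbitrary dollar amounts. Suppose your preferences cycle, A ≻ B ≻ C ≻ A, and you start out holding A; someone who knows this can charge you a small fee for each trade you are genuinely happy to make:

$$
A \;\xrightarrow{\text{pay }\$1}\; C \;\xrightarrow{\text{pay }\$1}\; B \;\xrightarrow{\text{pay }\$1}\; A
$$

and you end up holding exactly what you started with, $3 poorer, even though every individual step looked like a good deal at the time.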
Please make sure to enter this contest before the deadline!
Oh thank you, I might. Initially I Had Criticisms, but as with the FLI worldbuilding contest, my criticisms turned into outlines of solutions, and now I have ideas.