Great work! I’m really excited about seeing more quality EA/longtermism-related content on YouTube (in the mold of e.g. Kurzgesagt and Rob Miles) and this was a fun and engaging example. Especially enjoyed the cartoon depictions of Nick Bostrom and Toby Ord :)
Quick note: the link in the video description for ‘The Case for Strong Longtermism’ currently links to an old version. It has since been significantly revised, so you might consider linking to that instead.
Agreed!
I was initially a little worried when I saw that one of the first videos this channel put up had “longtermism” right in the title, since explicit EA/longtermism branding increases the potential downside of giving a lot of people a bad first impression. (See also the awareness/inclination model of movement growth.) But:
The video seemed to me to capture important arguments pretty well, as well as just being engaging, looking fairly professional, etc. So I’m now glad it exists and is getting a bunch of views already.
The fact that Rob Miles narrated it means at least one person with good judgement on these matters, other than the people behind this channel, looked at the script and video before release.
Hopefully some more did too?
(Looping such people in seems like a good way to mitigate possible risks on that front, as well as just to increase the quality and upsides)
Some caveats to that positive view (which I’m writing here because Writer indicated elsewhere an interest in feedback):
As Fin and I separately noted below, there were two mistakes regarding the presentation of Ord’s estimate
This suggests to me that it’d be good to slightly increase the number of people from whom you seek feedback on scripts, or to otherwise adjust your process for seeking feedback (e.g., give people more time to give feedback, or appoint one person to be especially thorough, to mitigate diffusion of responsibility)
But these weren’t critical errors; they’re unlikely to turn someone off longtermism
I think some people would prefer it if introductions to longtermism less often emphasised, or started with, the astronomical waste framing.
But I think the main worry they have is with something like “Even if the probability was just [extremely tiny number], when you multiply that by [extremely big number] it’s still huge.” And this video didn’t really explicitly do that, so it seems probably fine to me.
(One reason to be worried about that framing is that it sounds fanatical, and the actual case may not require fanaticism—the probabilities might just not be extremely tiny.)
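(To spell out that multiplication with deliberately made-up numbers, purely for illustration, the worrying pattern is naive expected-value reasoning of the form

$$\underbrace{10^{-20}}_{\text{tiny probability}} \times \underbrace{10^{40}}_{\text{possible future lives}} = 10^{20} \text{ lives in expectation},$$

where the astronomically large second factor swamps any scepticism about the first. The video mostly avoided resting its case on this kind of step.)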
Agreed. I don’t think this video got anything badly wrong, but do be aware that there are plenty of EA types on this forum and elsewhere who would be happy to read over and comment on scripts.
One other minor mistake (I think): The video says “All the impact would derive from increasing the chance that current humans will get to experience extremely long and fulfilling lives as a result of hastening transformative technological development.” But one can also make a case for existential risk reduction based on more everyday concerns like not having 8 billion people die, be pushed into a collapsed society, or be pushed into a dystopia.
See also https://forum.effectivealtruism.org/tag/ethics-of-existential-risk and https://forum.effectivealtruism.org/posts/dfiKak8ZPa46N7Np6/the-person-affecting-value-of-existential-risk-reduction
I don’t think it’s worth adding a clarification to the video regarding this—I think it’s a non-central point—but it also weakly pushes in favour of getting a little more feedback before uploading.
(But again, I do think the video is good and I’m glad it exists and is getting viewed!)
This makes me think that I could definitely use more feedback. It makes me kind of sad that I could have added this point myself (together with making Toby Ord’s estimate more precise) and didn’t, for reasons I can’t quite pin down.
Edit: one easy thing I could do is post scripts in advance here or on LW and request feedback (besides messaging people directly and waiting for their answers, which is often slow and fatiguing).
Edit 2: Oh, and the Slack group. Surely more ideas will come to me; I’ll stop adding edits now.
Hey, don’t be sad. It’s really brave to do work like this and make it public—thank you :) I’d definitely recommend this to newcomers (indeed, I just did! Edited to add: the animations are so cool here!). You might consider sending future drafts through to CEA’s press team: they have a large network of people, and I have always found the quality of their feedback to be really high and helpful! Thanks again for this!
Hey, thanks a lot for this comment. It did brighten my mood.
I think I’ll definitely want to send scripts to CEA’s press team, especially if they are heavily EA-related like this one. Do you know how I can contact them? (I’m not sure I know what CEA’s press team is. Do you mean I should just send an e-mail to CEA via their website?)
Absolutely :) Sky Mayhew <sky@centreforeffectivealtruism.org> and <media@centreforeffectivealtruism.org>. Sky is incredibly kind and has always given me brilliant feedback when I’ve had to do external-facing stuff before (which I find intensely nerve-wracking and have to do quite a lot of). I can’t speak highly enough of their help. FWIW, the people who have given me feedback on my sharing your video have mainly done so in the form of WOW face reactions on Slack; I’ll let you know if I get anything else. My husband also loves this type of stuff and was psyched to watch your video too. :)
Thanks a lot! This is definitely going to be helpful :)
FWIW, I think it’s almost inevitable that a script of roughly the length you had will initially contain some minor errors. I don’t think you should feel bad about that or like it’s specific to you or something. I think it just means it’s good to get feedback :)
(And I think these minor errors didn’t create notable downside risk in this case, and it’s still a good video.)
Thank you a lot for this feedback. My perception was that the main worry people here had about Rational Animations was potential downsides, and that this worry stemmed mainly from our artistic choices. (Because I was getting few upvotes, some downvotes, and feedback that didn’t feel very “central”, I figured there was probably something about the whole thing that was putting some people off, but that they weren’t able to put their finger on it. Am I correct in this assessment?) I feel like I want to update the direction of the channel because of this, but I need some time to reflect, and maybe to write a post to check whether what I’m thinking matches what people here are thinking.
Also, stepping back, I think you mainly need to answer two questions, each of which calls for a different type of data (neither of which is karma on the Forum/LessWrong):
1. Will it be net positive for these videos to be widely viewed by people who aren’t (yet) highly engaged in the EA/rationality communities? How positive? Can the upsides of that be increased or the downsides decreased?
2. Will these videos be widely viewed by people who aren’t (yet) highly engaged in the EA/rationality communities?
I think Q1 is best answered through actively soliciting feedback on video ideas, scripts, rough cuts, etc. from specific people or groups who are unusually likely to have good judgement on such things. This could be people who’ve done somewhat similar projects (like Rob Miles), engaged EAs who know about whatever topic you’re covering in a given video, non-EAs who know about that topic, or groups in which some of those types of people can be found (e.g., the Slack I made).
I think Q2 is best answered by the number of views your videos to date have gotten, the likes vs dislikes, the comments on YouTube, etc.
I think Forum/LessWrong karma does serve as weak evidence on both questions, but only weak evidence. Karma is a very noisy and coarse-grained metric. (So I don’t think getting low karma is a very bad sign, and I think there are better things to be looking at.)
This is very, very helpful feedback; thank you for taking the time to give it (here and on the other post). Also, I’m way less anxious getting feedback like this than hopelessly trying to gauge things by upvotes and downvotes. I think I need to talk more to individual EAs, engage more with comments, and express my doubts more, like I’m doing now. My initial instinct was to run away (post/interact less), but this feels much better, as well as being more helpful.
Yeah, I think it is worth posting often, but that’s only partly because you get feedback; it’s more for other reasons (e.g., making it more likely that people will find your posts/videos when they’re relevant to them, by seeing them on the home page, finding them in a search, or via tags). And comments are generally more useful as feedback than karma, and comments from specific people asked in places outside the Forum are generally more useful as feedback than Forum comments.
(See also Reasons for and against posting on the EA Forum.)
I think maybe it’s more like:
The category “work that’s for large, mostly-non-EA audiences” can be prone to downside risks, so it’s worth thinking about them at least briefly when thinking about something from that category
Making videos hasn’t been tried much by EAs (excluding things like just videos of presentations / Q&As), so people have less of an idea of how to weigh up the upsides and downsides for that type of thing
And this is even more true for animated videos with a substantial sense of humour / substantial “quirkiness”
So some people are sort of reflexively worried or unsure what to think
I think that this makes not upvoting a pretty reasonable choice for someone who just sees the post/video and doesn’t take the time to think about the pros and cons; maybe they want to neither encourage nor discourage something that’s unfamiliar to them and that they don’t know the implications of. I wouldn’t endorse the downvotes myself, but I guess maybe they’re based on something similar but with a larger degree of worry.
Or maybe some people just personally don’t enjoy the style/videos, separate from their beliefs about whether the channel should exist.
I wouldn’t guess that lots of people are actively opposed to specific artistic choices. Though I could of course be wrong.
Thanks! Just updated the link :)
I should also mention that Toby Ord’s 1⁄6 (17ish%) figure is for the chance of extinction this century, which isn’t made totally clear in the video (although I appreciate not much can be done about that)!
Updated the description with “Clarification: Toby Ord estimates that the chance of human extinction this century (and not in general) is around 1⁄6, which is 16.66...%”
Actually, that’s his estimate of existential catastrophe this century, which includes both extinction and unrecoverable collapse or unrecoverable dystopia. (I seem to recall that Ord also happens to think extinction is substantially more likely this century than the other two types of existential catastrophe, but I can’t remember for sure, and in any case his 1⁄6 estimate is stated as being about existential catastrophe.)
Maybe you could say: “Clarification: Toby Ord estimates that the chance of an existential catastrophe (not just extinction) this century (not across all time) is around 1⁄6; his estimate for extinction only might be lower, and his estimate across all time is higher. For more on the concept of existential catastrophe, see https://www.existential-risk.org/concept.pdf”
(Or you could use a more accessible link, e.g. https://forum.effectivealtruism.org/tag/existential-catastrophe-1 )
Copy-pasted your text and included both links with an “and”. Also… any feedback on the video? This post’s performance seems a little disappointing to me, as always.
Edit: oh, I see now that you replied above.
(Thanks for being receptive to and acting on this and Fin’s feedback!)
Thanks, good catch! Possibly should have read over my reply...
Thanks for making both those updates :)