It’s not her fault that she has finite time and wanted to keep the video’s running length down; maybe this stuff is more in the weeds and not accessible at a surface glance. Or maybe it’s Ord’s and MacAskill’s fault for not emphasizing how much we’ve been struggling with this debate (in my above-linked shortform, I accuse Ord of being too bullish on positive longtermism and not bullish enough on negative longtermism).
Off the top of my head, without trying very hard, here are some in-movement confrontations of that problem:
https://forum.effectivealtruism.org/posts/wkWG4tYEgF6Ko49bH/don-t-leave-your-fingerprints-on-the-future
https://arbital.obormot.net/page/value_cosmopolitan.html
Arguably this counts: https://www.alignmentforum.org/posts/hvGoYXi2kgnS3vxqb/some-ai-research-areas-and-their-relevance-to-existential-1#Computational_Social_Choice__CSC_ (trying to get the AI community to use social choice theory). See the Baum publication, which cites the ancient Coherent Extrapolated Volition paper, which compares at length how Jewish atheists (like the author) and Islamists (as in “convert or die” extremists) would each reason about the alignment problem.
https://forum.effectivealtruism.org/posts/r5GbSZ7dcb6nbuWch/quinn-s-shortform?commentId=pvXtqvGfjATkJq7N2
https://forum.effectivealtruism.org/topics/value-lock-in