‘3 Body Problem’ is a new 8-episode Netflix TV series that’s extremely popular, highly rated (7.8/10 on IMDb), and based on the bestselling 2008 science fiction book by Chinese author Liu Cixin.
It raises a lot of EA themes, e.g. extinction risk (for both humans & the San-Ti aliens), longtermism (planning 400 years ahead against alien invasion), utilitarianism (e.g. sacrificing a few innocents to save many), cross-species empathy (e.g. between humans & aliens), global governance to coordinate against threats (e.g. Thomas Wade, the UN, the Wallfacers), etc.
Curious what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students?
PS: Fun fact: after my coauthor Peter Todd (Indiana U.) and I read the ‘3 Body Problem’ novel in 2015, we were invited to a conference on ‘active Messaging to Extraterrestrial Intelligence’ (‘active METI’) at the Arecibo radio telescope in Puerto Rico. Inspired by Liu Cixin’s book, we gave a talk about the extreme risks of active METI, which we then wrote up as this journal paper, published in 2017:
PDF here
Journal link here
Title: The Evolutionary Psychology of Extraterrestrial Intelligence: Are There Universal Adaptations in Search, Aversion, and Signaling?
Abstract
To understand the possible forms of extraterrestrial intelligence (ETI), we need not only astrobiology theories about how life evolves given habitable planets, but also evolutionary psychology theories about how intelligence emerges given life. Wherever intelligent organisms evolve, they are likely to face similar behavioral challenges in their physical and social worlds. The cognitive mechanisms that arise to meet these challenges may then be copied, repurposed, and shaped by further evolutionary selection to deal with more abstract, higher-level cognitive tasks such as conceptual reasoning, symbolic communication, and technological innovation, while retaining traces of the earlier adaptations for solving physical and social problems. These traces of evolutionary pathways may be leveraged to gain insight into the likely cognitive processes of ETIs. We demonstrate such analysis in the domain of search strategies and show its application in the domains of emotional aversions and social/sexual signaling. Knowing the likely evolutionary pathways to intelligence will help us to better search for and process any alien signals from the search for ETIs (SETI) and to assess the likely benefits, costs, and risks of humans actively messaging ETIs (METI).
I completely agree, Geoffrey! I originally read Liu Cixin’s series before I became involved in EA, and would highly recommend it to anyone who’s reading this comment.
I think the series very much touches on themes common in EA thought, such as existential risk, speciesism, and what it means to be moral.[1]
I think what makes Cixin’s work feel like it has EA themes is that much of the series challenges how humanity views its place in the universe: it questions assumptions about what the universe is and about our moral obligations to others in it, much as EA challenges ‘common-sense’ views of the world and of moral obligation.
(I also referenced it in this reply to Matthew Barnett)
I haven’t seen the series, but am currently halfway through the second book.
I think it really depends on the person. The person I imagine would watch ‘3 Body Problem’, get hooked, and subsequently ponder how it relates to the real world seems like someone who would also get hooked by just being sent a good LessWrong post?
But sure, if someone mentioned to me that they watched and liked the series and they don’t already know about EA, I think it could be a great way to start a conversation about EA and longtermism.
I think there’s a huge difference in potential reach between a major TV series and a LessWrong post.
According to this summary from the Financial Times, as of March 27, ‘3 Body Problem’ had received about 82 million view-hours, equivalent to about 10 million people worldwide watching the whole 8-part series. It was a top 10 Netflix series in over 90 countries.
Whereas a good LessWrong post might get 100 likes.
We should be more scope-sensitive about public impact!
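The viewership estimate above is simple arithmetic, and can be sanity-checked in a few lines. This is just a sketch of the back-of-envelope calculation; the ~8-hour total runtime (roughly one hour per episode) is my assumption, not a figure from the Financial Times summary.

```python
# Back-of-envelope check of the viewership estimate:
# ~82 million global view-hours divided by the series length
# gives the number of "full-series-equivalent" viewers.
view_hours = 82_000_000        # reported view-hours (approximate)
series_length_hours = 8        # assumed: 8 episodes at ~1 hour each

full_series_viewers = view_hours / series_length_hours
print(f"{full_series_viewers:,.0f} full-series-equivalent viewers")
# prints: 10,250,000 full-series-equivalent viewers
```

That lands at roughly 10 million, matching the figure quoted from the Financial Times summary.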
I think I am misunderstanding the original question then?
I mean, if you ask “what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students”, then the reach is not the 10 million people watching the show; it’s the people you get a chance to speak to.
The book, in my opinion, is better, and it relies so heavily on vast realizations and plot twists that it’s best read blind: before the series, and even before the blurb on the back of the book! So for those who didn’t know it was a book, here it is: https://www.amazon.fr/Three-Body-Problem-Cixin-Liu/dp/0765377063
I didn’t know about this; now I think I have a new Netflix show to watch! Thanks!
On the topic, I hear season 7, episode 5 of Young Sheldon is about a dangerous AI. Edit: I watched the episode; it’s not.