My biggest takeaway from the comments so far is that many/most of the commenters don’t care whether longtermism is a novel idea, or at least care about that much less than I do. I never really thought about that before – I never really thought that would be the response.
I guess it’s fine to not care about that. The novelty (or lack thereof) of longtermism matters to me because it sure seems like a lot of people in EA have been talking and acting like it’s a novel idea. I care about “truth in advertising” even as I also care about whether something is a good idea or not.
I think the existential risk/global catastrophic risk work that longtermism-under-the-name-“longtermism” builds on is overall good and important, and most likely quite actionable (e.g. detect those asteroids, NASA!), even though there may be major errors in it, as well as other flaws and problems, such as a lot of weird and far-fetched stuff in Nick Bostrom’s work in particular. (I think the idea in Bostrom’s original 2002 paper that the universe is a simulation that might get shut down is roughly on par with supernatural ideas about the apocalypse like the Rapture or Ragnarök. I think it’s strange to find it in what’s trying to be serious scholarship, and it makes that scholarship less serious.)
The fundamental point about risk is quite simple and intuitive: 1) humans are biased toward ignoring low-probability events that could have huge consequences, and 2) when thinking about such events, including those that could end the world, we should weigh the consequences not just for the people alive today and the world today, but for the world for the rest of time and for all future generations.
That’s a nearly perfect argument! It’s also something you can explain in under a minute to anyone, and they’ll intuitively get it, and probably agree or at least be sympathetic.
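To make point 1) concrete with a toy calculation (the numbers here are invented purely for illustration, not taken from any source): suppose some catastrophe has a 1-in-10,000 chance per century of killing all 8 billion people alive today. The expected loss is then

E[\text{lives lost}] = p \times N = 10^{-4} \times \left(8 \times 10^{9}\right) = 8 \times 10^{5} \text{ lives per century}

– on the order of a major ongoing disease, and that’s before counting any future generations, which is exactly what point 2) says we should also count.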
As I recall, when NASA surveyed or consulted the American public, the public’s desire for NASA to work on asteroid defense was overwhelmingly higher than NASA expected. I think this is good evidence that the general public finds arguments of this form persuasive and intuitive. And I believe that when NASA learned the public cared so much, it prioritized asteroid defense much more than it had previously.
I don’t have any data on this right now (I could look it up), but to the extent that people – especially outside the U.S. – haven’t turned COVID-19 into a politically polarized, partisan issue and haven’t bought into conspiracy theories or pseudoscience/non-credible science, I imagine there would be strong support for pandemic preparedness among people who thought the virus was real, the threat was real, and the alarmed response was appropriate. This isn’t rocket science – or, with asteroid defense, it literally is, but understanding why we want to launch the rockets isn’t rocket science.
I think the fact that the term didn’t add anything new is very bad, because it came at a great cost. When you create a new set of jargon for an old idea, you look naive and self-important. The EA community could simply have used framing that people already agreed with; instead, they created a new term and field that we had to sell people on.
Discussions of “the loss of potential human lives in our own galactic supercluster is at least ~10^46 per century of delayed colonization” were elaborate and off-putting, when their only conclusion was the same old, obvious idea that we should prevent pandemics, nuclear war, and Skynet. (The idea that humanity should avoid extinction goes back at least to discussions of nuclear apocalypse in the 1940s, and The Terminator came out in 1984.)
I agree with your first paragraph (and I think we probably agree on a lot!), but in your second paragraph, you link to a Nick Bostrom paper from 2003, which is 14 years before the term “longtermism” was coined.
I think, independently from anything to do with the term “longtermism”, there is plenty you could criticize in Bostrom’s work, such as being overly complicated or outlandish, despite there being a core of truth in there somewhere.
But that’s a point about Bostrom’s work that long predates the term “longtermism”, not a point about whether coining and promoting that term was a good idea or not.