First I should note that I wrote my previous comment on my phone in the middle of the night when I should have been asleep long before, so I wasn't thinking fully about how others would interpret my words. Seeing the reaction to it, I see that the comment didn't add value as written, and I probably should have just waited and written it later, when I could unambiguously communicate what bothered me about it at length (as I do in this comment).
No worries! I appreciate the context and totally relate :) (and relate with the desire to write a lot of things to clear up a confusion!)
For your general point, I would guess this is mostly a semantic/namespace collision thing? There's "longtermism" as the group of people who talk a lot about x-risk, AI safety, and pandemics because they hold some weird beliefs here, and there's longtermism as the moral philosophy that future people matter a lot.
I saw Matt's point as saying that the "longtermism" group doesn't actually need to have much to do with the longtermism philosophy, and thus it's weird that they call themselves longtermists. They are basically the only people working on AI x-risk, so they are the group associated with that worldview, and they try hard to promote it, even though this is really an empirical belief that has little to do with their longtermism.
I mostly didn’t see his post as an attack or comment on the philosophical movement of longtermism.
But yeah, overall I would guess that we mostly just agree here?
There’s “longtermism” as the group of people who talk a lot about x-risk, AI safety and pandemics because they hold some weird beliefs here
Interesting. When I think of "longtermists" as a group of people, I think of the set of people who subscribe to (and self-identify with) some moral view that's basically "longtermism," not people who work on reducing existential risks. While there's a big overlap between these two sets of people, I think referring to, e.g., people who reject caring about future people as "longtermists" is pretty absurd, even if such people also hold the weird empirical beliefs about AI (or bioengineered pandemics, etc.) posing a huge near-term extinction risk. Caring about AI x-risk, or thinking the x-risk from AI is large, is simply not the thing that makes a person a "longtermist."
But maybe people have started using the word "longtermist" in this way, and that's the reason Yglesias worded his post as he did? (I haven't observed this, but it sounds like you might have.)
Yeah, this feels like the crux. My read is that "longtermist EA" is a term used to encompass "holy shit, x-risk" EA too.