But after reading Will MacAskill’s book “What We Owe The Future” and the surge of media coverage it generated, I think I’ve talked myself into my own corner of semi-confusion over the use of the name “longtermist” to describe concerns related to advances in artificial intelligence. Because at the end of the day, the people who work in this field and who call themselves “longtermists” don’t seem to be motivated by any particularly unusual ideas about the long term. And it’s actually quite confusing to portray (as I have previously) their main message in terms of philosophical claims about time horizons. The claim they’re making is that there is a significant chance that current AI research programs will lead to human extinction within the next 20 to 40 years. That’s a very controversial claim to make. But appending “and we should try really hard to stop that” doesn’t make the claim more controversial.
This paragraph in the introduction to that post seems to answer the question posed in the post, and concisely argues that “longtermism,” as it manifests as x-risk work, is not actually related to the long term? I don’t follow what bothers you about the post.
I am admittedly biased, because this is far and away the thing that most annoys me about longtermist EA marketing—caring about x-risk is completely common sense if you buy the weird empirical beliefs about AI x-risk, which have nothing to do with moral philosophy. But I thought he made the point coherently and well. So long as you’re happy with the (IMO correct) statement that “longtermism” in practice mostly manifests as working on x-risk.
EDIT: This paragraph is a concise, one-sentence summary of the argument in the post.
In other words, there’s nothing philosophically controversial about the idea that averting likely near-term human extinction ought to be a high priority — the issue is a contentious empirical claim.
Thanks for the reply, Neel.

First I should note that I wrote my previous comment on my phone in the middle of the night when I should have been asleep long before, so I wasn’t thinking fully about how others would interpret my words. Seeing the reaction to it, I see that the comment didn’t add value as written and I probably should have just waited to write it later, when I could unambiguously communicate what bothered me about the post at length (as I do in this comment).
To clarify, I agree with you and Yglesias that most longtermists are working on things like preventing AI from causing human extinction only a few decades from now, meaning the work is also very important from a short-term perspective that doesn’t give weight to what happens after, say, 2100. So I agree with you that “‘longtermism’ in practice mostly manifests as working on [reducing near-term] x-risk.”

I also agree that there’s an annoying thing about “longtermist EA marketing” related to the above. (I liked your Simplify EA Pitches to “Holy Shit, X-Risk”.)
To explain what bothered me about Yglesias’ post more clearly, let me first say that my answer to “What’s long-term about ‘longtermism’?” is the part of longtermism that (in my words) gives significant moral weight to the many potential beings that might come to exist over the course of the long-term future (trillions upon trillions of years). Since that “part” is actually the whole of longtermism, one could also just answer that longtermism is long-term.
In other words, the question sounds similar to (though not exactly like) “What’s liberal about liberalism?” or “What’s colonial about colonialism?”
I would therefore expect a post with the title “What’s long-term about ‘longtermism’?” to explain that longtermism is a moral view that gives enough moral weight to the experiences of future beings who might come to exist that the long-term future of life matters a lot in expectation, given how long that future might be (trillions upon trillions of years) and how much space in the universe it might make use of (a huge amount of resources beyond this pale blue dot).
But instead, Yglesias’ post points out that the interventions that people who care about beings in the long-term future think are most worthwhile often look like things that people who don’t care about future generations would also think are important (if they held the same empirical beliefs about near-term AI x-risk, as some of them do).
And my reaction to that is, okay, yes Yglesias, I get it and agree, but you didn’t actually argue that longtermism isn’t “long term” like your title suggested you might. Longtermism absolutely is “long-term” (as I described above). The fact that some interventions favored by longtermists also look good from non-longtermist moral perspectives doesn’t change that.
Yglesias:
Because at the end of the day, the people who work in this field and who call themselves “longtermists” don’t seem to be motivated by any particularly unusual ideas about the long term.
This statement is a motte in that he says “any particularly unusual ideas about the long term” rather than “longtermism”.
(I think the vast majority of people care about future generations in some capacity; e.g., they care about their children and their friends’ children before those children are born. Where we draw the line between this and some form of “strong longtermism” that actually is “particularly unusual” is unclear to me. For example, I think most people also actually care about their friends’ unborn children’s unborn children, though people often don’t make this explicit, so it’s unclear to me how unusual the longtermist moral view actually is.)
If we replace “any particularly unusual ideas about the long term” with “longtermism,” then Yglesias’ statement seems to become an easily attackable bailey.
In particular, I would say that the statement seems false, uncharitable, and unsubstantiated. Yglesias is making a generalization, and obviously it’s a generalization that’s true of some people working on reducing x-risks posed by AI, but I know it’s definitely not true of many others working on x-risks. For example, there are definitely many self-described longtermists working on reducing AI x-risk who are in fact motivated by wanting to make sure that humanity doesn’t go extinct so that future people can come to exist.
While I’m not an AI alignment researcher, I’ve personally donated a substantial fraction of my earnings to people doing this work, and I do many things in the movement-building / field-building category to try to get other people to work on reducing AI risk. I can personally attest that I care a lot more about preventing extinction so that future beings are able to come to exist and live great lives than I care about saving my own life and everyone I know and love today. It’s not that I don’t care about my own life and everyone else alive today—I do, a tremendous amount—but rather that, as Derek Parfit argued, the worst part about everyone dying today would by far be the loss of all future value, not 8 billion human lives being cut short.

I hope this clarifies my complaint about Yglesias’ “What’s long-term about ‘longtermism’?” post.
The last thing that I’ll say in this comment is that I found the post via Yglesias’ “Some thoughts on the FTX collapse” post that Rob responded to in the OP. Here’s how Yglesias cited his “What’s long-term about ‘longtermism’?” post in the FTX collapse piece:
If you are tediously familiar with the details of EA institutions, I think you’ll see my list is closer to the priorities of Open Philanthropy (the Dustin Moskovitz / Cari Tuna EA funding vehicle) than to those of the FTX Future Fund. In part, that’s because as you can see in the name, SBF was very publicly affiliated with promoting the “longtermism” idea, which I find to be a little bit confused.
As I’ve explained at length in this comment, I think longtermism is not confused. Contra Yglesias (though again Yglesias doesn’t actually argue against the claim, which is what I found annoying), longtermism is in fact “long-term.”
Yglesias is actually the one who is confused, both in failing to recognize that longtermism is in fact “long-term” and in conflating the motivations of some people working on reducing near-term extinction risk from AI with “longtermism.”
Again: Longtermism is a moral view that emphasizes the importance of future generations throughout the long-term future. People who favor this view (self-identified “longtermist” EAs) often end up favoring working on reducing the risk of near-term human extinction from AI. People who are only motivated by what happens in the near term may also view working on this problem as important. But that does not mean that longtermism is not “long term,” because “the motivation of some people working on reducing near-term extinction risk from AI” is not “longtermism.”
I want to say “obviously!” to this (because that’s what I was thinking when I read Yglesias’ post late last night, which is why I was annoyed by it), but I also recognize that EAs’ communications related to “longtermism” have been far from perfect, and it’s not surprising that some smart people like Yglesias are confused.
In my view it probably would have been better to have and propagate a term for the general idea that “creating new happy beings is morally good, as opposed to morally neutral,” rather than “longtermism.” Then we could just talk about the obvious fact that, under this moral view, it seems very important not to miss out on the opportunity to put the extremely large stock of resources available in our galaxy and beyond to use producing happy beings for trillions upon trillions of years to come, whether by allowing human extinction in the near term or by otherwise failing to become grabby and endure for a long time. But this would be the subject of another discussion.
Edited to add: Sorry this post is so long. Whenever I feel like I wasn’t understood in writing I have a tendency to want to write a lot more to overexplain my thoughts; in other words, I’ve written absurdly long comments like this before in similar circumstances. Hopefully it wasn’t annoying to read it all. Obviously the time cost to me of writing it is much more than the time cost to you or others of reading it, but I’m also wary of putting out lengthy text for others to read where shorter text could have sufficed. I just know I have trouble keeping my comments concise under conditions like this, and psychologically it was easier for me to just write everything out as I wrote it. (To be honest, I also think doing this generally isn’t a very good use of my time, and I’d like to get better at not doing this, or at least not as often.)
First I should note that I wrote my previous comment on my phone in the middle of the night when I should have been asleep long before, so I wasn’t thinking fully about how others would interpret my words. Seeing the reaction to it, I see that the comment didn’t add value as written and I probably should have just waited to write it later, when I could unambiguously communicate what bothered me about the post at length (as I do in this comment).
No worries! I appreciate the context and totally relate :) (and relate with the desire to write a lot of things to clear up a confusion!)
For your general point, I would guess this is mostly a semantic/namespace collision thing? There’s “longtermism” as the group of people who talk a lot about x-risk, AI safety, and pandemics because they hold some weird beliefs here, and there’s longtermism as the moral philosophy that future people matter a lot.
I saw Matt’s point as saying that the “longtermism” group doesn’t actually need to have much to do with the longtermism philosophy, and that it’s thus weird that they call themselves longtermists. They are basically the only people working on AI x-risk, and thus are the group associated with that worldview and try hard to promote it, even though this is really an empirical belief that doesn’t have much to do with their longtermism.
I mostly didn’t see his post as an attack or comment on the philosophical movement of longtermism.
But yeah, overall I would guess that we mostly just agree here?
There’s “longtermism” as the group of people who talk a lot about x-risk, AI safety and pandemics because they hold some weird beliefs here
Interesting. When I think of the group of people called “longtermists,” I think of the set of people who subscribe to (and self-identify with) some moral view that’s basically “longtermism,” not the set of people who work on reducing existential risks. While there’s a big overlap between these two sets of people, I think referring to, e.g., people who reject caring about future people as “longtermists” is pretty absurd, even if such people also hold the weird empirical beliefs about AI (or bioengineered pandemics, etc.) posing a huge near-term extinction risk. Caring about AI x-risk, or thinking the x-risk from AI is large, is simply not the thing that makes a person a “longtermist.”
But maybe people have started using the word “longtermist” in this way, and that’s the reason Yglesias worded his post as he did? (I haven’t observed this, but it sounds like you might have.)
But maybe people have started using the word “longtermist” in this way, and that’s the reason Yglesias worded his post as he did? (I haven’t observed this, but it sounds like you might have.)
Yeah, this feels like the crux; my read is that “longtermist EA” is a term used to encompass “holy shit, x-risk” EA too.