I just went down a medium-sized rabbit hole of Matthew Yglesias' Substack posts related to EA/longtermism and have to say I'm extremely disappointed by the quality of his posts.
I can't comment on them directly to give him feedback because I'm not a subscriber, so I'm sharing my reaction here instead.
e.g. This one has a clickbait title and doesn't answer the question in the post, nor argue that the titular question assumes a false premise, which makes the post super annoying: https://www.slowboring.com/p/whats-long-term-about-longtermism
But after reading Will MacAskill's book "What We Owe The Future" and the surge of media coverage it generated, I think I've talked myself into my own corner of semi-confusion over the use of the name "longtermist" to describe concerns related to advances in artificial intelligence. Because at the end of the day, the people who work in this field and who call themselves "longtermists" don't seem to be motivated by any particularly unusual ideas about the long term. And it's actually quite confusing to portray (as I have previously) their main message in terms of philosophical claims about time horizons. The claim they're making is that there is a significant chance that current AI research programs will lead to human extinction within the next 20 to 40 years. That's a very controversial claim to make. But appending "and we should try really hard to stop that" doesn't make the claim more controversial.
This paragraph in the introduction to that post seems to answer the question in the post, and concisely argue that "longtermism" as it manifests as x-risk is not actually related to the long-term? I don't follow what bothers you about the post.
I am admittedly biased, because this is far and away the thing that most annoys me about longtermist EA marketing: caring about x-risk is completely common sense if you buy the weird empirical beliefs about AI x-risk, which have nothing to do with moral philosophy. But I thought he made the point coherently and well, so long as you're happy with the (IMO correct) statement that "longtermism" in practice mostly manifests as working on x-risk.
EDIT: This paragraph is a concise, one-sentence summary of the argument in the post.
In other words, there's nothing philosophically controversial about the idea that averting likely near-term human extinction ought to be a high priority; the issue is a contentious empirical claim.
Thanks for the reply, Neel.
First I should note that I wrote my previous comment on my phone in the middle of the night when I should have been asleep long before, so I wasn't thinking fully about how others would interpret my words. Seeing the reaction to it, I see that the comment didn't add value as written and I probably should have just waited to write it later, when I could unambiguously communicate what bothered me about it at length (as I do in this comment).
To clarify, I agree with you and Yglesias that most longtermists are working on things like preventing AI from causing human extinction only a few decades from now, meaning the work is also very important from a short-term perspective that doesn't give weight to what happens after, say, 2100. So I agree with you that "'longtermism' in practice mostly manifests as working on [reducing near-term] x-risk."
I also agree that there's an annoying thing about "longtermist EA marketing" related to the above. (I liked your Simplify EA Pitches to "Holy Shit, X-Risk".)
To explain what bothered me about Yglesias' post more clearly, let me first say that my answer to "What's long-term about 'longtermism'?" is (in my words) the "giving significant moral weight to the many potential beings that might come to exist over the course of the long-term future (trillions upon trillions of years)" part of longtermism. Since that "part" of longtermism actually is wholly what longtermism is, one could also just answer "longtermism is long-term."
In other words, the question sounds similar to (though not exactly like) "What's liberal about liberalism?" or "What's colonial about colonialism?"
I therefore would expect a post with the title "What's long-term about 'longtermism'?" to explain that longtermism is a moral view that gives enough moral weight to the experiences of future beings that might come to exist such that the long-term future of life matters a lot in expectation, given how long that future might be (trillions upon trillions of years) and how much space in the universe it might make use of (a huge amount of resources beyond this pale blue dot).
But instead, Yglesias' post points out that the interventions that people who care about beings in the long-term future think are most worthwhile often look like things that people who didn't care about future generations would also think are important (if they held the same empirical beliefs about near-term AI x-risk, as some of them do).
And my reaction to that is: okay, yes Yglesias, I get it and agree, but you didn't actually argue that longtermism isn't "long-term" like your title suggested you might. Longtermism absolutely is "long-term" (as I described above). The fact that some interventions favored by longtermists also look good from non-longtermist moral perspectives doesn't change that.
Yglesias:
Because at the end of the day, the people who work in this field and who call themselves "longtermists" don't seem to be motivated by any particularly unusual ideas about the long term.
This statement is a motte in that he says "any particularly unusual ideas about the long term" rather than "longtermism."
(I think the vast majority of people care about future generations in some capacity, e.g. they care about their children and their friends' children before the children are born. Where we draw the line between this and some form of "strong longtermism" that actually is "particularly unusual" is unclear to me. E.g. I think most people actually care about their friends' unborn children's unborn children too, though people often don't make this explicit, so it's unclear to me how unusual the longtermist moral view actually is.)
If we replace "any particularly unusual ideas about the long term" with "longtermism," then Yglesias' statement seems to become an easily attackable bailey.
In particular, I would say that the statement seems false, uncharitable, and unsubstantiated. Yglesias is making a generalization, and obviously it's a generalization that's true of some people working on reducing x-risks posed by AI, but I know it's definitely not true of many others working on x-risks. For example, there are definitely many self-described longtermists working on reducing AI x-risk who are in fact motivated by wanting to make sure that humanity doesn't go extinct so that future people can come to exist.
While I'm not an AI alignment researcher, I've personally donated a substantial fraction of my earnings to people doing this work and do many things that fall in the movement-building/field-building category to try to get other people to work on reducing AI risk, and I can personally attest to the fact that I care a lot more about preventing extinction to ensure that future beings are able to come to exist and live great lives than I care about saving my own life and everyone I know and love today. It's not that I don't care about my own life and everyone else alive today (I do, a tremendous amount), but rather that, as Derek Parfit argued, the worst part about everyone dying today would by far be the loss of all future value, not 8 billion human lives being cut short.
I hope this clarifies my complaint about Yglesias' "What's long-term about 'longtermism'?" post.
The last thing that I'll say in this comment is that I found the post via Yglesias' "Some thoughts on the FTX collapse" post that Rob responded to in the OP. Here's how Yglesias cited his "What's long-term about 'longtermism'?" in the FTX collapse post:
If you are tediously familiar with the details of EA institutions, I think you'll see my list is closer to the priorities of Open Philanthropy (the Dustin Moskovitz/Cari Tuna EA funding vehicle) than to those of the FTX Future Fund. In part, that's because as you can see in the name, SBF was very publicly affiliated with promoting the "longtermism" idea, which I find to be a little bit confused.
As I've explained at length in this comment, I think longtermism is not confused. Contra Yglesias (though again, Yglesias doesn't actually argue against the claim, which is what I found annoying), longtermism is in fact "long-term."
Yglesias is actually the one who is confused, both in his failure to recognize that longtermism is in fact "long-term" and because he confuses/conflates the motivations of some people working on reducing near-term extinction risk from AI with "longtermism."
Again: Longtermism is a moral view that emphasizes the importance of future generations throughout the long-term future. People who favor this view (self-identified "longtermist" EAs) often end up favoring working on reducing the risk of near-term human extinction from AI. People who are only motivated by what happens in the near term may also view working on this problem as important. But that does not mean that longtermism is not "long-term," because "the motivation of some people working on reducing near-term extinction risk from AI" is not "longtermism."
I want to say "obviously!" to this (because that's what I was thinking when I read Yglesias' post late last night, and it's why I was annoyed by it), but I also recognize that EAs' communications related to "longtermism" have been far from perfect, and it's not surprising that some smart people like Yglesias are confused.
In my view it probably would have been better to have and propagate a term for the general idea that "creating new happy beings is morally good, as opposed to morally neutral" rather than "longtermism." Then we could just talk about the obvious fact that under this moral view it seems very important not to miss out on the opportunity to put the extremely large stock of resources available in our galaxy and beyond to use producing happy beings for trillions upon trillions of years to come, by e.g. allowing human extinction in the near term or otherwise failing to become grabby and endure for a long time. But that would be the subject of another discussion.
Edited to add: Sorry this post is so long. Whenever I feel like I wasn't understood in writing, I have a tendency to want to write a lot more to overexplain my thoughts. In other words, I've written absurdly long comments like this before in similar circumstances. Hopefully it wasn't annoying to read it all. Obviously the time cost to me of writing it is much more than the time cost to you or others of reading it, but I'm also wary of putting out lengthy text for others to read where shorter text could have sufficed. I just know I have trouble keeping my comments concise under conditions like this, and psychologically it was easier for me to just write everything out as I wrote it. (For what it's worth, I also think doing this generally isn't a very good use of my time and I'd like to get better at not doing this, or at least not as often.)
First I should note that I wrote my previous comment on my phone in the middle of the night when I should have been asleep long before, so I wasn't thinking fully about how others would interpret my words. Seeing the reaction to it, I see that the comment didn't add value as written and I probably should have just waited to write it later, when I could unambiguously communicate what bothered me about it at length (as I do in this comment).
No worries! I appreciate the context and totally relate :) (and relate with the desire to write a lot of things to clear up a confusion!)
For your general point, I would guess this is mostly a semantic/namespace collision thing? There's "longtermism" as the group of people who talk a lot about x-risk, AI safety and pandemics because they hold some weird beliefs here, and there's longtermism as the moral philosophy that future people matter a lot.
I saw Matt's point as saying that the "longtermism" group doesn't actually need to have much to do with the longtermism philosophy, and that it's thus weird that they call themselves longtermists: they are basically the only people working on AI x-risk, and thus are the group associated with that worldview and the ones trying hard to promote it, even though this is really an empirical belief that doesn't have much to do with their longtermism.
I mostly didn't see his post as an attack or comment on the philosophical movement of longtermism.
But yeah, overall I would guess that we mostly just agree here?
There's "longtermism" as the group of people who talk a lot about x-risk, AI safety and pandemics because they hold some weird beliefs here
Interesting. When I think of the group of people "longtermists," I think of the set of people who subscribe to (and self-identify with) some moral view that's basically "longtermism," not people who work on reducing existential risks. While there's a big overlap between these two sets of people, I think referring to e.g. people who reject caring about future people as "longtermists" is pretty absurd, even if such people also hold the weird empirical beliefs about AI (or bioengineered pandemics, etc.) posing a huge near-term extinction risk. Caring about AI x-risk, or thinking the x-risk from AI is large, is simply not the thing that makes a person a "longtermist."
But maybe people have started using the word "longtermist" in this way, and that's the reason Yglesias worded his post as he did? (I haven't observed this, but it sounds like you might have.)
But maybe people have started using the word "longtermist" in this way, and that's the reason Yglesias worded his post as he did? (I haven't observed this, but it sounds like you might have.)
Yeah, this feels like the crux. My read is that "longtermist EA" is a term used to encompass "holy shit, x-risk" EA too.
Also, in the Yglesias post that Rob wrote the OP in response to, Yglesias misrepresents SBF's view and then cites the 80k podcast as supporting this mistaken view when in fact it does not. That's just bad journalism.
Until very recently, for example, I thought I had an unpublishable, off-the-record scoop about his weird idea that someone with his level of wealth should be indifferent between the status quo and a double-or-nothing bet with 50:50 odds.
There's no way that is or ever has been SBF's view; I think Yglesias is just misrepresenting it. Of course SBF wouldn't be completely indifferent between keeping whatever his net worth was and taking a bet with a 50% chance of doubling it and a 50% chance of losing it all.
That I had this information made me nervous on behalf of people making plans based on his grants and his promises of money; I didn't realize this is actually something he's repeatedly said publicly and on the record.
Yglesias then links to the allegedly offending passage, but I have to say that the passage does not support Yglesias' assertion that SBF is/was completely risk neutral about money. Choosing a 10% chance of $15 billion over a 100% chance of $1 billion is not risk neutral. It still allows for quite a bit of risk aversion.
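To make that concrete, here's a minimal sketch I put together (my own illustration; the utility function and the alpha values are my assumptions, not anything SBF or Yglesias said): under constant-relative-risk-aversion utility u(x) = x^alpha, where alpha = 1 is risk neutral and smaller alpha is more risk averse, an agent with alpha around 0.9 is genuinely risk averse yet still prefers the 10% shot at $15 billion over a sure $1 billion.
```python
# Sketch with assumed CRRA utility u(x) = x**alpha (alpha = 1 is risk neutral,
# smaller alpha is more risk averse). Amounts are in billions of dollars.

def expected_utilities(alpha):
    sure = 1.0 ** alpha           # utility of a certain $1B
    gamble = 0.1 * 15.0 ** alpha  # 10% chance of $15B, 90% chance of $0
    return sure, gamble

for alpha in (1.0, 0.9, 0.85, 0.7):
    sure, gamble = expected_utilities(alpha)
    choice = "gamble" if gamble > sure else "sure $1B"
    print(f"alpha={alpha:.2f}: EU(sure)={sure:.3f}, EU(gamble)={gamble:.3f} -> {choice}")
```
The crossover is only around alpha = 0.85, so choosing the gamble is consistent with genuine risk aversion and doesn't pin SBF down as fully risk neutral.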
I didn't relisten to the full 80k interview to see if something SBF said justifies Yglesias' assertion, but from memory I feel quite sure nothing does.
It still doesn't fully entail Matt's claim, but the content of the interview gets a lot closer than that description. You don't need to give it a full listen; I've quoted the relevant part:
https://forum.effectivealtruism.org/posts/THgezaPxhvoizkRFy/clarifications-on-diminishing-returns-and-risk-aversion-in?commentId=ppyzWLuhkuRJCifsx
Thanks for finding and sharing that quote. I agree that it doesn't fully entail Matt's claim, and would go further to say that it provides evidence against Matt's claim.
In particular, SBF's statement...
At what point are you out of ways for the world to spend money to change? [...] [I]t's unclear exactly what the answer is, but it's at least billions per year probably, so at least 100 billion overall before you risk running out of good things to do with money.
... makes clear that SBF was not completely risk neutral.
At the end of the excerpt Rob says "So you kind of want to just be risk neutral." To me, the "kind of" is important to understanding his meaning. Relative to an individual making the "gamble my $10 billion and either get $20 billion or $0, with equal probability" bet for their own benefit, for the altruistic actor it's "not so crazy." Obviously it's still crazy, but Rob's point that it's not as crazy as the madness of an individual doing this for their own self-interested gain is completely valid, given the difference in how steeply returns to spending diminish for a single individual versus all moral patients in the world (present and future) combined.
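To illustrate that last point with a toy model (the numbers and the log-utility assumption are entirely made up by me, not taken from the interview): with diminishing returns over one's own wealth, a 50:50 double-or-nothing bet on $10 billion is disastrous in expectation for a self-interested individual, but for an altruist whose wealth is a small slice of a much larger pool of resources doing good, utility is nearly linear over the bet's range, so the bet is close to a wash.
```python
import math

# Toy comparison under assumed log utility (all numbers hypothetical, in $B).
WEALTH = 10.0    # the bettor's own wealth
POOL = 1_000.0   # assumed total pool of resources doing good in the world
FLOOR = 1e-6     # tiny floor so log(0) is avoided when the individual loses everything

# (a) Self-interested individual: utility over own wealth only.
ind_keep = math.log(WEALTH)
ind_bet = 0.5 * math.log(2 * WEALTH) + 0.5 * math.log(FLOOR)

# (b) Altruist: utility over the total pool their wealth adds to.
alt_keep = math.log(POOL + WEALTH)
alt_bet = 0.5 * math.log(POOL + 2 * WEALTH) + 0.5 * math.log(POOL)

print(f"individual: keep={ind_keep:.2f}, bet={ind_bet:.2f}")   # the bet is far worse
print(f"altruist:   keep={alt_keep:.5f}, bet={alt_bet:.5f}")   # nearly indifferent
```
The altruist still comes out a hair behind on the bet (the concavity never fully vanishes), which is why Rob's "kind of" risk neutral, rather than exactly risk neutral, seems like the right reading.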
Yglesias' statement that SBF thought "someone with his level of wealth should be indifferent between the status quo and a double-or-nothing bet with 50:50 odds" is clearly false, though only a few words different from SBF's agreement with Rob that an altruist doing this is "not so crazy" compared to a person doing it for self-interested reasons. So I agree that "the content of the interview gets a lot closer than that description," but I also think Yglesias just did a bad job interpreting the interview. But who knows; maybe SBF misspoke to Yglesias in person, and most of Yglesias' reason for believing SBF held that view was actually what SBF said to him in person rather than the podcast.