In a misleading editorial about longtermism, Phil Torres makes this claim. What is he talking about?
Also, what happened to the guy who wrote “Morality, Foresight, and Human Flourishing: An Introduction to Existential Risks” that turned him into an enemy of EA?
Searching online, I believe he gave the talk at EA Summit 2013, back when EA community-building was much more volunteer-based and didn’t have much in the way of formal organization.
As for Torres, my secondhand impression was a combination of a) believing that EA-types don’t give social justice-style concerns enough weight compared to the overwhelming importance of the far future, and b) personally feeling upset/jilted that he was rejected from a number of EA jobs.
He also gave a talk at the EA Summit 2014.
That feels very uncharitable. I understand you probably have insider knowledge, but in the linked article he mentions strong disagreements with ideas like the one quoted, which is something I can see many people having problems with. There are plenty of people who dislike (parts of) longtermism for reasons similar to those in the article, and I don’t think most of them are bitter about being rejected from EA jobs or are SJWs.

Edit: since this is getting a lot of downvotes, I just want to clarify that I do think that quote is a strawman of some longtermist ideas. But I do think we should be charitable about critics’ motivations and at least mention the ones they would agree with.
Edit2: Reading through https://forum.effectivealtruism.org/search?terms=torres , it seems there is indeed some extra information and a lot of prior history about Torres’ motivations.
Especially in light of that my tone should have been way less strong. I should have written something like this: https://forum.effectivealtruism.org/posts/xtKRPkoMSLTiPNXhM/response-to-phil-torres-the-case-against-longtermism?commentId=LQhs9jJ3qx7x6Gfiv
https://forum.effectivealtruism.org/posts/xtKRPkoMSLTiPNXhM/response-to-phil-torres-the-case-against-longtermism?commentId=YSRyHbA2vmwMu9ZKo
OP asked a question about Torres specifically. I gave them my personal subjective impression of the best account I have about Torres’ motivations. I’m not going to add a “and criticizing EA is often a virtuous activity and we can learn a lot from our critics and some of our critics may well be pure in heart and soul even if this particular one may not be” caveat to every one of my comments discussing specific criticisms of EA.
Phil isn’t an unknown internet critic whose motivations are opaque; he is/was a well known person whose motivations and behaviour are known first-hand by many in the community. Perhaps other people have other motivations for disliking longtermism, but the question OP asked was about Phil specifically, and Linch gave the Phil specific answer.
Yeah, but who is speaking here? Beckstead? I don’t know any “Beckstead”s. Phil Torres is claiming that The Longtermist Stance is “we should prioritise the lives of people in rich countries over those in poor countries”, even though I’ve never heard EAs say that. At most Beckstead thinks so, though that’s not what Beckstead said. What Beckstead said was provisional (“now seems more plausible to me”) and not a call to action. Torres is trying to drag down discourse by killing nuance and saying misleading things.
Torres’ article is filled with misleading statements, and I have made longer and stronger remarks about it here. (Even so I’m upvoting you, because −6 is too harsh IMO)
Yes the article is indeed full of strawmen and misleading statements.
But (not knowing anything about Torres) I felt the top comment was strongly violating the principle of charity when trying to understand the author’s motivations.
I think the principle of charity is very important (especially when posting on a public forum), and saying that someone’s true motivations are not the ones they claim should require extraordinary proof (which maybe is the case! I don’t know anything about the history of this particular case).
Extraordinary proof? This seems too high to me. You need to strike the right balance between diagnosing dishonesty when it doesn’t exist and failing to diagnose it when it does. Both types of errors have serious costs. Given the relatively high prevalence of deception among humans (see e.g. this book), I would be very surprised if requiring “extraordinary proof” of dishonesty produced the best consequences on balance.
Summary: Peter Thiel spoke at the EA Summit conferences organized in both 2013 and 2014, but was the sole keynote speaker only at the 2014 conference. Thiel’s other affiliation with EA at the time was through Leverage Research and the Machine Intelligence Research Institute, two EA-affiliated non-profit organizations to which Thiel had been a major donor for multiple years. Leverage Research was one of the main organizations sponsoring the 2013 and 2014 conferences. Thiel has not donated to MIRI since at least 2015, due to a difference of perspective on the likely impact of advanced AI on the long-term future. Thiel is presently still a major donor to Leverage Research, but that organization has not self-identified as an EA-aligned organization since at least 2018/19.
You are correct in your other comment: Peter Thiel was only one of multiple keynote speakers at the 2013 Effective Altruism Summit. He was the keynote speaker at the 2014 EA Summit.
The “EA Summits” were a series of EA conferences organized in 2013, 2014 and 2018. Multiple organizations sponsored the 2013 and 2014 Summits, but the primary one was Leverage Research. Leverage was the sole organizer and sponsor of the EA Summit in 2018. The EA Global series was initiated as a more formal series of conferences managed by the Centre for Effective Altruism (CEA) as EA grew into a global movement in its own right.
In 2014 and earlier, Thiel’s relationship to EA was as a repeat, major donor to both Leverage and the Machine Intelligence Research Institute (MIRI). After 2014, any direct affiliation Thiel had with EA ceased, at his own initiative. AI alignment was the EA cause that primarily attracted Thiel. He stopped donating to MIRI after he came to disagree with its relatively pessimistic perspective on the impact advanced artificial intelligence would have on the long-term future.
Thiel has continued donating to Leverage as an organization independent of EA, after he otherwise ceased donating to any other EA-affiliated organizations. Leverage Research explicitly stopped self-identifying as an EA-aligned organization from 2018/19 onward. Geoff Anders, the founder and executive director of Leverage Research, is on the record stating that Peter Thiel was still a major, repeat donor to the organization as of 2021/22.