Very interesting map. Lots of good information.
philosophytorres
Oh, I see. Did they not ask for his approval? I’m familiar with websites devising their own outrageously hyperbolic headlines for articles authored by others, but I genuinely assumed that a website as reputable as Slate would have asked a figure as prominent as Bostrom for approval. My apologies!
A fantastically interesting article. I wish I’d seen it earlier—about the time this was published (last February) I was completing an article on “agential risks” that ended up in the Journal of Evolution and Technology. In it, I distinguish between “existential risks” and “stagnation risks,” each of which corresponds to one of the disjuncts in Bostrom’s original definition. Since these have different implications—I argue—for understanding different kinds of agential risks, I think it would be good to standardize the nomenclature. Perhaps “population risks” and “quality risks” are preferable (although I’m not sure “quality risks” and “stagnation risks” have exactly the same extension). Thoughts?
(Btw, the JET article is here: http://jetpress.org/v26.2/torres.pdf.)
Friends: I recently wrote a few thousand words on the implications that a Trump presidency will have for global risk. I’m fairly new to this discussion group, so I hope posting the link doesn’t contravene any community norms. Really, I would eagerly welcome feedback on this. My prognosis is not good.
Wow, this is absolutely stunning. I can’t myself participate, but I genuinely hope this project takes off. I’m sure you’re familiar with the famous (and now demolished) Building 20 at MIT: https://en.wikipedia.org/wiki/Building_20. It provided a space for interdisciplinary work—and wow, the results were truly amazing.
As it happens, I found numerous cases of truly egregious cherry-picking, demonstrably false statements, and (no, I’m not kidding) out-of-context mined quotes in just a few pages of Pinker’s “Enlightenment Now.” Take a look for yourself. The terrible scholarship is shocking. https://docs.wixstatic.com/ugd/d9aaad_8b76c6c86f314d0288161ae8a47a9821.pdf
Sloppy scholarship. Please do take a look, if you have a moment: https://www.salon.com/2019/01/26/steven-pinkers-fake-enlightenment-his-book-is-full-of-misleading-claims-and-false-assertions/.
Virtually every point here misrepresents what I wrote. I commend your take-down of various straw men, but you really did miss the main thrust (and details) of the critique. I suspect that you would (notably) fail an Ideological Turing Test.
Your “steelmanning” is abysmal, in my opinion. It really doesn’t represent the substance of my criticisms. I will definitely be citing this post in a forthcoming journal paper on the issue.
You don’t even have the common courtesy of citing the original post so that people can decide for themselves whether you’ve accurately represented my arguments (you haven’t). This is very typical “authoritarian” (or controlling) EA behavior in my experience: rather than giving critics an actual fair hearing, which would be the intellectually honest thing, you try to monopolize and control the narrative by not citing the original source, and then reformulating all the arguments while describing these reformulations as “steelmanned” versions (which some folks who give EA the benefit of the doubt might just accept)—despite the fact that the original author (me) thinks you’ve done a truly abysmal job at accurately presenting the critique. As mentioned, this will definitely get cited in a forthcoming article; it really does embody much of what’s epistemically wrong with this community.
John: Do I have your permission to release screenshots of our exchange? You write: ”… including persistently sending me messages on Facebook.” I believe that this is very misleading.
Have you seen my papers on the topic, by chance? One is published in Inquiry, the other is forthcoming. Send me an email if you’d like!
One is here: https://docs.wixstatic.com/ugd/d9aaad_64ac5f0da7ea494ab48f54181b249ce4.pdf. And my critique of the radical utopianism and valuation of imaginary lives that undergirds the most prominent notion of “existential risk” today is here: https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_33466a921b2646a7a02482acb89b07b8.pdf
[Responding to Alex HT above:]
I’ll try to find the time to respond to some of these comments. I would strongly disagree with most of them. For example, one that just happened to catch my eye was: “Longtermism does not say our current world is replete with suffering and death.”
So, the target of the critique is Bostromism, i.e., the systematic web of normative claims found in Bostrom’s work. (Just to clear one thing up, “longtermism” as espoused by “leading” longtermists today has been hugely influenced by Bostromism—this is a fact, I believe, about intellectual genealogy, which I’ll try to touch upon later.)
There are two main ingredients of Bostromism, I argue: total utilitarianism and transhumanism. The latter absolutely does indeed see our world the way many religious traditions have: wretched, full of suffering, something to ultimately be transcended (if not via the rapture or Parousia then via cyborgization and mind-uploading). This idea, this theme, is so prominent in transhumanist writings that I don’t know how anyone could deny it.
Hence, if transhumanism is an integral component of Bostromism (and it is), and if Bostromism is a version of longtermism (which it is, on pretty much any definition), then the millennialist view that our world is in some sort of “fallen state” is an integral component of Bostromism, since this millennialist view is central to the normative aspects of transhumanism.
Just read “Letter from Utopia.” It’s saturated in a profound longing to escape our present condition and enter some magically paradisiacal future world via the almost supernatural means of radical human enhancement. (Alternatively, you could write to a religious scholar about transhumanism. Some have, in fact, written about the ideology. I doubt you’d find anyone who’d reject the claim that transhumanism is imbued with millennialist tendencies!)
Don’t work on “longtermist” issues, please! You are very right to feel the pull of suffering right now. See this for more: https://www.xriskology.com/mini-book.
“He characterises various long-termists as white supremacists on the flimsiest grounds imaginable.” I would encourage you to contact, well, quite literally anyone who studies “white supremacy.” This is precisely what I did BEFORE making the criticisms I made. Literally every single scholar I spoke with—including some at Princeton—was shocked and appalled by that quote from Nick Beckstead, as well as some other quotes I provided to them (in context, of course). The “white supremacy” claim is not mine, John. I’m just relaying what anyone who studies the issue will tell you, if you were sufficiently curious to contact the relevant scholars. Furthermore, I have never once called you a “white supremacist.” That is an egregious and defamatory lie that you should take back immediately (or you should provide, for all to see, evidence to the contrary).
“He has unfortunately misrepresented himself as working at CSER on various media (unclear if deliberate or not).” No, I haven’t, Sean, and you know this from our personal exchanges. I forgot to change the CSER affiliation on FB—and only FB—for a few months after leaving. As soon as you pointed it out, I changed it immediately. Your intellectual dishonesty here is really upsetting.
For whatever it’s worth, I show in a forthcoming, peer-reviewed philosophy paper that Ord’s view is, in fact, worse than Bostrom’s in multiple ways. I will, of course, happily share a link to the document once it’s published (although I know some folks at FHI have a copy right now).
“I argue that while many of the criticisms of Bostrom strike true, newer formulations of longtermism and existential risk – most prominently Ord’s The Precipice (but also Greaves, MacAskill, etc) – do not face the same challenges.”
Again, Sean, more intellectual dishonesty: “I have been informed by Torres that I owe him an apology for not siding with him.” I’m tempted to take screenshots and share them here. These are lies.
How about this for AI publicity, written by Nick Bostrom himself: “You Should Be Terrified of Superintelligent Machines,” via Slate!
http://www.slate.com/articles/technology/future_tense/2014/09/will_artificial_intelligence_turn_on_us_robots_are_nothing_like_humans_and.html