To be clear, everything they complain about was after I left the project (so far as I know). I was as surprised as anyone else to read Zoe’s EA Forum post—I hadn’t even seen a draft of it, and didn’t know she’d written it. Their complaints had nothing to do with me having worked on an early draft of the paper!
philosophytorres
Friends: I recently wrote a few thousand words on the implications that a Trump presidency will have for global risk. I’m fairly new to this discussion group, so I hope posting the link doesn’t contravene any community norms. Really, I would eagerly welcome feedback on this. My prognosis is not good.
No one knew I was involved, though. Honestly. All that happened after I’d moved on. I was as surprised as everyone else to read Zoe’s EA Forum post.
Also worth noting that I’d mentioned our original collaboration to many people in the community prior to that tweet. This isn’t new information.
Have you seen my papers on the topic, by chance? One is published in Inquiry, the other is forthcoming. Send me an email if you’d like!
You don’t even have the common courtesy to cite the original post so that people can decide for themselves whether you’ve accurately represented my arguments (you haven’t). This is very typical “authoritarian” (or controlling) EA behavior in my experience: rather than giving critics an actual fair hearing, which would be the intellectually honest thing, you try to monopolize and control the narrative by not citing the original source, and then reformulating all the arguments while describing these reformulations as “steelmanned” versions (which some folks who give EA the benefit of the doubt might simply accept), despite the fact that the original author (me) thinks you’ve done a truly abysmal job of accurately presenting the critique. As mentioned, this will definitely get cited in a forthcoming article; it really does embody much of what’s epistemically wrong with this community.
Very interesting map. Lots of good information.
A fantastically interesting article. I wish I’d seen it earlier—about the time this was published (last February) I was completing an article on “agential risks” that ended up in the Journal of Evolution and Technology. In it, I distinguish between “existential risks” and “stagnation risks,” each of which corresponds to one of the disjuncts in Bostrom’s original definition. Since these have different implications—I argue—for understanding different kinds of agential risks, I think it would be good to standardize the nomenclature. Perhaps “population risks” and “quality risks” are preferable (although I’m not sure “quality risks” and “stagnation risks” have exactly the same extension). Thoughts?
(Btw, the JET article is here: http://jetpress.org/v26.2/torres.pdf.)
Oh, I see. Did they not ask for his approval? I’m familiar with websites devising their own outrageously hyperbolic headlines for articles authored by others, but I genuinely assumed that a website as reputable as Slate would have asked a figure as prominent as Bostrom for approval. My apologies!
[Responding to Alex HT above:]
I’ll try to find the time to respond to some of these comments. I would strongly disagree with most of them. For example, one that just happened to catch my eye was: “Longtermism does not say our current world is replete with suffering and death.”
So, the target of the critique is Bostromism, i.e., the systematic web of normative claims found in Bostrom’s work. (Just to clear one thing up, “longtermism” as espoused by “leading” longtermists today has been hugely influenced by Bostromism—this is a fact, I believe, about intellectual genealogy, which I’ll try to touch upon later.)
There are two main ingredients of Bostromism, I argue: total utilitarianism and transhumanism. The latter does indeed see our world the way many religious traditions have: wretched, full of suffering, something ultimately to be transcended (if not via the rapture or Parousia, then via cyborgization and mind-uploading). This idea, this theme, is so prominent in transhumanist writings that I don’t know how anyone could deny it.
Hence, if transhumanism is an integral component of Bostromism (and it is), and if Bostromism is a version of longtermism (which it is, on pretty much any definition), then the millennialist view that our world is in some sort of “fallen state” is an integral component of Bostromism, since this millennialist view is central to the normative aspects of transhumanism.
Just read “Letter from Utopia.” It’s saturated in a profound longing to escape our present condition and enter some magically paradisiacal future world via the almost supernatural means of radical human enhancement. (Alternatively, you could write a religious scholar about transhumanism. Some have, in fact, written about the ideology. I doubt you’d find anyone who’d reject the claim that transhumanism is imbued with millennialist tendencies!)
Sloppy scholarship. Please do take a look, if you have a moment: https://www.salon.com/2019/01/26/steven-pinkers-fake-enlightenment-his-book-is-full-of-misleading-claims-and-false-assertions/.
International Criminal Law and the Future of Humanity: A Theory of the Crime of Omnicide
EA Forum moderators take note: I believe the individual above is the same person who created these two Twitter accounts just a few days ago, both of which were used to harass me on Twitter: https://twitter.com/xriskology/status/1569401706999595009. I have screenshots of many of our exchanges if you’d like. Harassment on social media should warrant being banned from this website, especially when the harasser continues to conceal their identity. Please act.
(EDIT: Please note also that this “throwaway” account was created just this month. Are you, as a community, okay with people creating anonymous Twitter accounts and anonymous EA Forum accounts to share misleading and out-of-context screenshots about someone? If so, I’ll make a note of it.)
John: Do I have your permission to release screenshots of our exchange? You write: “… including persistently sending me messages on Facebook.” I believe that this is very misleading.
Virtually every point here misrepresents what I wrote. I commend your take-down of various straw men, but you really did miss the main thrust (and details) of the critique. I suspect that you would (notably) fail an Ideological Turing Test.
Were the Great Tragedies of History “Mere Ripples”?
How about this for AI publicity, written by Nick Bostrom himself: “You Should Be Terrified of Superintelligent Machines,” via Slate!
One is here: https://docs.wixstatic.com/ugd/d9aaad_64ac5f0da7ea494ab48f54181b249ce4.pdf. And my critique of the radical utopianism and valuation of imaginary lives that undergirds the most prominent notion of “existential risk” today is here: https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_33466a921b2646a7a02482acb89b07b8.pdf
As it happens, I found numerous cases of truly egregious cherry-picking, demonstrably false statements, and (no, I’m not kidding) quotes mined out of context in just a few pages of Pinker’s “Enlightenment Now.” Take a look for yourself. The terrible scholarship is shocking. https://docs.wixstatic.com/ugd/d9aaad_8b76c6c86f314d0288161ae8a47a9821.pdf
Wow, this is absolutely stunning. I can’t myself participate, but I genuinely hope this project takes off. I’m sure you’re familiar with the famous (but now demolished) Building 20 at MIT: https://en.wikipedia.org/wiki/Building_20. It provided a space for interdisciplinary work—and wow, the results were truly amazing.