Have you seen my papers on the topic, by chance? One is published in Inquiry, the other is forthcoming. Send me an email if you’d like!
John: Do I have your permission to release screenshots of our exchange? You write: “… including persistently sending me messages on Facebook.” I believe that this is very misleading.
- May 12, 2021, 11:01 AM; 85 points; comment on “Response to Torres’ ‘The Case Against Longtermism’”
You don’t even have the common courtesy of citing the original post so that people can decide for themselves whether you’ve accurately represented my arguments (you haven’t). This is very typical “authoritarian” (or controlling) EA behavior in my experience: rather than giving critics an actual fair hearing, which would be the intellectually honest thing, you try to monopolize and control the narrative by not citing the original source, and then reformulating all the arguments while describing these reformulations as “steelmanned” versions (which some folks who give EA the benefit of the doubt might just accept), despite the fact that the original author (me) thinks you’ve done a truly abysmal job of accurately presenting the critique. As mentioned, this will definitely get cited in a forthcoming article; it really does embody much of what’s epistemically wrong with this community.
Your “steelmanning” is abysmal, in my opinion. It really doesn’t represent the substance of my criticisms. I will definitely be citing this post in a forthcoming journal paper on the issue.
Virtually every point here misrepresents what I wrote. I commend your takedown of various straw men, but you really did miss the main thrust (and the details) of the critique. I suspect that you would (notably) fail an Ideological Turing Test.
- May 18, 2021, 9:31 AM; 22 points; comment on “Response to Torres’ ‘The Case Against Longtermism’”
Sloppy scholarship. Please do take a look, if you have a moment: https://www.salon.com/2019/01/26/steven-pinkers-fake-enlightenment-his-book-is-full-of-misleading-claims-and-false-assertions/.
As it happens, I found numerous cases of truly egregious cherry-picking, demonstrably false statements, and (no, I’m not kidding) out-of-context mined quotes in just a few pages of Pinker’s “Enlightenment Now.” Take a look for yourself. The terrible scholarship is shocking. https://docs.wixstatic.com/ugd/d9aaad_8b76c6c86f314d0288161ae8a47a9821.pdf
Wow, this is absolutely stunning. I can’t myself participate, but I genuinely hope this project takes off. I’m sure you’re familiar with the famous (but now demolished) Building 20 at MIT: https://en.wikipedia.org/wiki/Building_20. It provided a space for interdisciplinary work, and the results were truly amazing.
Friends: I recently wrote a few thousand words on the implications that a Trump presidency will have for global risk. I’m fairly new to this discussion group, so I hope posting the link doesn’t contravene any community norms. Really, I would eagerly welcome feedback on this. My prognosis is not good.
A fantastically interesting article. I wish I’d seen it earlier—about the time this was published (last February) I was completing an article on “agential risks” that ended up in the Journal of Evolution and Technology. In it, I distinguish between “existential risks” and “stagnation risks,” each of which corresponds to one of the disjuncts in Bostrom’s original definition. Since these have different implications—I argue—for understanding different kinds of agential risks, I think it would be good to standardize the nomenclature. Perhaps “population risks” and “quality risks” are preferable (although I’m not sure “quality risks” and “stagnation risks” have exactly the same extension). Thoughts?
(Btw, the JET article is here: http://jetpress.org/v26.2/torres.pdf.)
Oh, I see. Did they not ask for his approval? I’m familiar with websites devising their own outrageously hyperbolic headlines for articles authored by others, but I genuinely assumed that a website as reputable as Slate would have asked a figure as prominent as Bostrom for approval. My apologies!
Very interesting map. Lots of good information.
How about this for AI publicity, written by Nick Bostrom himself: “You Should Be Terrified of Superintelligent Machines,” via Slate!
One is here: https://docs.wixstatic.com/ugd/d9aaad_64ac5f0da7ea494ab48f54181b249ce4.pdf. And my critique of the radical utopianism and valuation of imaginary lives that undergirds the most prominent notion of “existential risk” today is here: https://c8df8822-f112-4676-8332-ad89713358e3.filesusr.com/ugd/d9aaad_33466a921b2646a7a02482acb89b07b8.pdf