Hi, as an anti-natalist: while I have seen climate change branded as the leading motivation for anti-natalism, I don't think anti-natalists should first and foremost be regarded as motivated by climate change concerns.
Paradoxically, I don't have any concrete title in mind, but perhaps some science-fiction story could be fit in somewhere in the course? 2001: A Space Odyssey, as the most basic example.
You're of course entitled to your own opinion, but this post mostly comes off as condescending, with you claiming to know what's best for other people, in particular that they should quit their prestigious jobs (rather than, say, quit smoking).
Is it you, the author? Because your worship of him is akin to Michael’s worship of SBF.
My guess is that most people competent to review a philosophy paper either hate Rand or have never read her.
I believe that to be true, and a very telling sign of the kind of ivory tower that philosophy has become.
As a layman: the first and foremost correlation that pops into my mind is high reserves ~ responsible spending. A charity so rich it is dumping money into buying castles won't get this negative badge, and neither will one that isn't buying castles but is still on course to let go of its employees the moment funding slows down.
"Everyone but cis men" is a pretty vile policy.
I see it's indeed page 83 in the document on arXiv; it was 82 in the PDF on the OpenAI website.
OK, I should have been clear from the beginning: what struck me was that the first example essentially answered the question of how to do great harm with minimum spending; a really wicked "evil EA", I would say. I found it somewhat ironic.
Scroll down to page 82. No spoilers.
Also, I've noticed that MacAskill's book is in the bibliography, but just as a general reference, I would say. I haven't spotted any other major philosophical works.
While I agree that treating extreme pain is definitely in line with NU, a person struggling with major depression is, I believe, usually quite doubtful about their own efficacy and potential to achieve such goals. Plainly speaking, you can't work on ending factory farming if you can't even get out of bed.
Hi Geoffrey, I've found your work very interesting and hence respect your authority, but at the same time I can't fully agree. For me, reading Perry honestly felt great: someone out there might hold views similar to mine, someone would actually agree with me on certain things, and I was not all alone in the world. And in the end, both Perry and I lead fairly happy lives, I think. No one would arrive at her or Benatar's writings accidentally, and if they did, they wouldn't find them appealing.
But that was a side note. My main argument is this: I don't deny that most people are net happy. I just think that the price paid by those who suffer is a really high one, one not worth paying.
I’ve been attracted to this idea my whole adult life. However:
An actual attempt to pursue it would probably have quite awful consequences instead of the good ones imagined (the simplest case to realise: ending my own suffering would create suffering for those close to me). Killing other people wouldn't work either: there's no magic annihilation button, so that's probably not going to end well. Perhaps something like legalising euthanasia could actually reduce suffering rather than accidentally increase it.
As the previous point may already hint, I don't think this philosophy is a very healthy one to hold; rather, I believe it's the product of a mind troubled by suffering. So it's not that it changed my mind: it was something I naturally looked for and arrived at because I was depressed, and I didn't enjoy the journey very much.
I honestly don’t feel I’m anywhere near competent to evaluate how good anyone is as a director of some institute.
I’ve never seriously entertained the idea that EA is like a sect—until now. This is really uncanny.
There have been so many posts on this already, and, oh, here's another one, seeing that the Apology part makes up most of the post. Here's an opinion from the outside: the Apology is nowhere near as serious an issue as it is presented to be. I'll say more: many people would even argue that the original post Bostrom apologised for wasn't particularly bad, because many people worldwide may well hold views that are seen here as abhorrent. I'm not sure this holier-than-thou attitude on the forum is all that beneficial.
I know that outing gay people can get them murdered in places where homophobia is most rampant. On the other hand, polyamory seems to be a fairly popular model in the wealthy Bay Area community, often not kept secret and without such repercussions. So I'm reserved about whether this comparison is fair.
I studied philosophy, but I don't get the argument. Furthermore, I don't think there is any X such that X resolves population ethics.
So, uh, does it follow that realising human extinction [or another x-risk that is not an s-risk] could be desirable in order to avoid an s-risk? (e.g., VHEMT)