I think this is a good idea, however:
I was initially confused until I realized you meant hair. According to Google, "hear" isn't a word used for that purpose; the correct spelling is "hair."
I’d like to underline that I’m agnostic, and I don’t know what the true nature of our reality is, though lately I’ve been more open to anti-physicalist views of the universe.
For one, if there's a continuation of consciousness after death, then AGI killing lots of people might not be as bad as it would be if there were no such continuation. I would still consider it very bad, mostly because I like this world and the living beings in it and would not like them to end, but it wouldn't be the end of those consciousnesses, as some doomy AGI safety people imply.
Another thing is that the relationship between consciousness and the physical universe might be more complex than physicalists say (as some of the early figures of quantum physics thought), and there might be factors at play, unknown to current science, that could have an effect on the outcome. I don't have more to say about this because I'm uncertain what the relationship between consciousness and the physical universe would be in such a view.
And lastly, if there’s God or gods or something similar, such beings would have agency and could have an effect on what the outcome might be. For example, there are Christian eschatological views that say that the Christian prophecies about the New Earth and other such things must come true in some way, so the future cannot end in a total extinction of all human life.
Do the concepts behind AGI safety only make sense if you have roughly the same worldview as the top AGI safety researchers: secular atheism, reductive materialism/physicalism, and a computational theory of mind?
You may be aware of this already, but I think there is a clear difference between saving an existing person who would otherwise have died (and in the process reducing suffering by also preventing non-fatal illnesses) and starting a pregnancy, because before a pregnancy is started the person doesn't exist yet.
There are a couple of debate ideas I have, but I would most like to see a debate on whether ontological physicalism is the best view of the universe there is.
I would like to see someone like the theoretical physicist Sean Carroll represent physicalism, and someone like Professor Edward F. Kelly from the Division of Perceptual Studies at the University of Virginia represent anti-physicalism. The researchers at the Division of Perceptual Studies study near-death experiences, claimed past-life memories in children, and other parapsychological phenomena, and Edward F. Kelly has written three long books on why he thinks physicalism is false, relying largely on case studies that he says don't fit well with the physicalist worldview. Based on my understanding, the mainstream scientific community treats the research by the Division of Perceptual Studies as fringe science.
I'm personally agnostic, but I have thought about writing an effortpost for LessWrong steelmanning anti-physicalism based on Edward F. Kelly's works. I have doubted whether there would be any interest in it, though, because the people at LessWrong seem to be very certain of physicalism and to have a low opinion of other positions. If you think there would be interest in it, you can say so. Physicalism has very good arguments for it, and the anti-physicalist position relies on non-verifiable case studies being accurate.
The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik J. Larson wasn’t mentioned.
The movie Joker makes a good case that many criminals are created by circumstances, like mental illness, abuse and lack of support from society and other people. I still believe in some form of free will and moral responsibility of an individual, but criminals are also to some extent just unlucky.
You could study subjects, read books, watch movies and play video games, provided that these things are available. But I personally think that Buddhism is particularly optimized for solitary life, so I’d meditate, observe my mind and try to develop it and read Buddhist teachings. Other religions could also work, at least Christianity has had hermits.
What would you say is the core message of the Sequences? Naturalism is true? Bayesianism is great? Humans are naturally very irrational and have to put in effort if they want to be rational?
I've read the Sequences almost twice. The first time was fun because Yudkowsky was optimistic back then, but during the second time I was constantly aware that Yudkowsky believes, along the lines of his 'Death with dignity' post, that our doom is virtually certain and that he has no idea how to even begin formulating a solution. If Yudkowsky, who wrote the Sequences on his own, founded the modern rationalist movement on his own, and founded MIRI and the AGI alignment movement on his own, has no idea where to even begin looking for a solution, what hope do I have? I probably couldn't do anything comparable to those things on my own even if I tried my hardest for 30 years. I could thoroughly study everything Yudkowsky and MIRI have studied, which would be a lot, and after all that effort I would be in the same situation Yudkowsky is in right now: no idea where to even begin looking for a solution, only knowing which approaches don't work. The only reason to do it would be to gain a fraction of a dignity point, to use Yudkowsky's way of thinking.
To be clear, I don’t have a fixed model in my head about AI risk, I think I can sort of understand what Yudkowsky’s model is and I can understand why he is afraid, but I don’t know if he’s right because I can also sort of understand the models of those who are more optimistic. I’m pretty agnostic when it comes to this subject and I wouldn’t be particularly surprised by any specific outcome.
I've been studying religions a lot, and I have the impression that monasteries don't exist because the less fanatic members want to shut the more fanatic members off from the rest of society so they don't cause harm. I think monasteries exist because religious people really believe in the tenets of their religion and think that this is the best way for some of them to follow their religion and satisfy their spiritual needs. But maybe I'm just naive.
Does anyone here know why the Center for Human-Compatible AI hasn't published any research this year, even though it has been one of the most prolific AGI safety organizations in previous years?
How tractable are animal welfare problems compared to global health and development problems?
I'm asking because I think animal welfare is a more neglected issue, but I still donate to global health and development because I think it's more tractable.
The Center for Reducing Suffering is longtermist, but it focuses on the issues this article is concerned with. Suffering-focused views are not very popular, though, and I agree that most longtermist organizations and individuals seem to be focused on future humans more than on future non-human beings; at least that's my impression, and I could be wrong. The Center on Long-Term Risk is also longtermist, but focused on reducing suffering among all future beings.
Thank you for answering, your reasoning makes sense if longterm charities have a higher expected impact when taking into account the uncertainty involved.
Thank you for answering, I subscribed to that tag and I will take a closer look at those threads.
Thank you for taking the time to answer my question. What you said makes a lot of sense, but I just feel that the future is inherently unpredictable, and I don't think I can handle that much risk.
Hi, I've been interested in EA for years, but I'm not a heavy hitter. I'm expecting to give only tens of thousands of dollars over my lifetime.
That said, I have a problem and I'd like some advice on how to solve it: I don't know whether to focus on short-term organizations like Animal Charity Evaluators and GiveWell, or long-term organizations like the Machine Intelligence Research Institute, the Center for Reducing Suffering (CRS), the Center on Long-Term Risk (CLR), the Long-Term Future Fund, the Clean Air Task Force, and so on. It feels like long-term organizations are a huge gamble, and if they don't achieve their goals, I will feel like I've wasted my money. On the other hand, short-term organizations don't really focus on the big picture, and it's uncertain whether their actions will reliably reduce the amount of suffering in the universe, which is what CRS and CLR claim to do. What do you think?
I'm wondering what Nick Bostrom's p(doom) currently is, given the subject of this book. Nine years ago, in his lecture on his book Superintelligence, he said there was "less than 50% risk of doom". In this interview 4 months ago he said that it's good there has been more focus on risks in recent times, though there's still slightly less focus on the risks than would be optimal; he wants to focus on the upsides because he fears we might "overshoot" and not build AGI at all, which he would consider tragic. So it seems he thinks the risk is lower than it used to be because of this public awareness of the risks.