Mathematics student.
Gabin
I also felt a bit queasy when I read it, but I don’t think that is a bad reaction. When an area is so neglected that it’s hard to find great reviews, being enthusiastic when someone does very good work is probably a good first step. Also, even though some emotional empathy can be a good way to stay motivated, too much of it would be crushing, especially when we are talking about the suffering of billions or trillions of beings. I don’t think that feeling ‘enough’ empathy should be a goal. But I understand your feelings, and they make sense. I’m also mostly sad.
I disagree with your assessment of the reactions in EA (I recognize some of the symptoms you mention, but they don’t seem to apply to the majority of people), but I thank you for helping the world, and I hope your disgust will diminish a bit and that you will want to keep participating in EA.
I’m quite skeptical of the concept of an ASI that interacts with us but does not control us. If it can predict the impact of each of its actions, it effectively chooses the future of our species. So either we become an ASI (cognitive enhancement, mind uploading, ...) or we have an ASI that controls us, and then the goal is that it controls us benevolently. But putting that aside, my answer is yes; that seems good by definition of ‘benevolently’.
International institutions cannot ensure that weapons are never deployed. They failed at controlling nuclear weapons, and that was multiple orders of magnitude easier than controlling bioweapons, and those are only the technologies we know of. Each year, it becomes easier for a small group of smart people to destroy humanity. Moreover, today’s advances in AI make it easier to manipulate and control people at scale via the internet, so I put a probability of at least 30% on the world becoming less stable, not more, even without considering anything new beyond bio/nuclear.
For the risk of AGI being used to torture people, I’m not entirely sure of my position, but I think that anti-alignment (creating an AGI to torture people) is as hard as alignment, because it has the same problems. Moreover, my guess is that people who want to torture others will be a lot less careful than good people, so an AGI torturing people because it was developed by malevolent humans is a lot less probable than an AGI being good because it was developed by good humans, and so the expected value is still positive. However, there is a similar risk that I find more worrying: the uncanny valley of almost-alignment. It is possible that near misses are a lot worse than complete misses, because we would get an AGI that keeps us alive and conscious, but in a really bad way. If ending up in that uncanny valley is more probable than solving alignment, then AGI would have a negative expected value.
In what follows, I count strong cognitive enhancement as a form of AGI.
AGI not being developed is a catastrophically bad outcome, since humans will still be able to develop bio and nuclear weapons and other things we don’t know of yet. I therefore put a rather small probability on our surviving the next 300 years without AGI, and an extremely small probability on our surviving the next 1000 years. This means, in particular, no expansion throughout the galaxy, so not developing AGI implies that we kill almost all the potential people.
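To make the compounding explicit with invented numbers: if each century without AGI carries, say, a 30% chance of a self-inflicted catastrophe, then the chance of surviving three centuries is 0.7^3 ≈ 34%, and of surviving ten centuries 0.7^10 ≈ 3%. The 30% is a placeholder, not an estimate; the point is only that any substantial constant per-century risk compounds into near-certain doom over a millennium.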
However, if I could stop AGI research for 30 years, I would do it, so that alignment research can perhaps catch up.
An important example that nobody has mentioned is Harry Potter and the Methods of Rationality. It seems to me like a very good example of art about rationality and trying to create a better world. I don’t know whether it is useful for bringing in new people, but it can serve as a source of motivation and create a sense of community for people who liked it and can reference it in conversations. I think that stories are very good at that, and I like that as a community we’re trying to promote more fiction.
On the other hand, I think it is good that the forum is very clean and minimalist. It is easy to participate in a community in which some members have created stories you dislike, but it is harder if the “official” aesthetic puts you off.
I think that it is very difficult to beat community building (even non-professionally, just in some spare time) by having children. Community building also grows exponentially, and probably with a greater growth factor. Moreover, your grandchildren will probably not be that similar to you, and it is very plausible to me that your descendants’ effectiveness decreases quickly enough that having children doesn’t even beat the direct effects you could have instead.
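As a toy sketch of that comparison (every number below is invented for illustration, not an estimate): suppose each member you recruit goes on to recruit others, while each generation of descendants keeps only a fraction of your values.

```python
# Toy model: cumulative "impact" from community building vs. descendants.
# All parameters are invented placeholders, not estimates.

def community_impact(years, recruit_factor=1.2, step=10):
    # Each member recruits ~0.2 new members per decade (invented),
    # so the membership multiplies by 1.2 every 10 years.
    members, total = 1.0, 0.0
    for _ in range(years // step):
        total += members  # one unit of impact per member-decade
        members *= recruit_factor
    return total

def descendant_impact(years, children=2, value_retention=0.5, generation=30):
    # Descendants double each generation but keep only half your values
    # (both numbers invented), so impact per generation stays flat.
    people, effectiveness, total = 1.0, 1.0, 0.0
    for _ in range(years // generation):
        total += people * effectiveness
        people *= children
        effectiveness *= value_retention
    return total

print(community_impact(90))   # ≈ 20.8: compounds with the recruit factor
print(descendant_impact(90))  # = 3.0: 2 × 0.5 = 1 per generation, no growth
```

Whenever children × value_retention ≤ 1, the descendant line cannot grow at all, while any recruit factor above 1 compounds, so the qualitative conclusion is robust to a wide range of parameter choices.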
This new organization seems to be mostly focused on economic and sustainability issues. Are you also interested in the animal welfare side of it? What is your point of view on that?