Suffering should not exist.
Hacking Weirdness Points
Brian Tomasik’s essay “Why I Don’t Focus on the Hedonistic Imperative” is worth reading. Since biological life will almost certainly be phased out in the long run and replaced with machine intelligence, AI safety probably has far more longtermist impact than biotech-related suffering reduction. Still, it could be argued that a better understanding of valence and consciousness could make future AIs safer.
An argument against advocating human extinction is that cosmic rescue missions might eventually be possible. If the future of posthuman civilization converges toward utilitarianism, and posthumanity becomes capable of expanding throughout and beyond the entire universe, it might be possible to intervene in far-flung regions of the multiverse and put an end to suffering there.
5. Argument from Deep Ecology
This is similar to the Argument from D-Risks, albeit more down to Earth (pun intended), and is the main stance of groups like the Voluntary Human Extinction Movement. Human civilization has already caused immense harm to the natural environment, and will likely not stop anytime soon. To prevent further damage to the ecosystem, we must allow our problematic species to go extinct.
This seems inconsistent with anti-natalism and negative utilitarianism. If we ought to focus on preventing suffering, why shouldn’t anti-natalism also apply to nature? By the same reasoning that anti-natalists apply to humans, it could be argued that reducing wild-animal populations is a good thing, since it would reduce the amount of suffering in nature.
Even if the Symmetry Theory of Valence turns out to be completely wrong, that doesn’t mean that QRI will fail to gain any useful insight into the inner mechanics of consciousness. Andrew Zuckerman previously sent me this comment on QRI’s pathway to impact, written in response to Nuño Sempere’s criticisms of QRI. The expected value of QRI’s research therefore comes with a very high degree of variance: it’s possible that their research will amount to almost nothing, but it’s also possible that it will turn out to have a large impact. As far as I know, there aren’t any other EA-aligned organizations doing the sort of consciousness research that QRI is doing.
The way I presented the problem also fails to account for the possibility of a strong apocalyptic Fermi filter that will destroy humanity. Such a filter could explain why we seem to be so early in cosmic history: on this view, cosmic history is unavoidably about to end. This should skew us more toward hedonism.
Anatoly Karlin’s Katechon Hypothesis is one Fermi Paradox hypothesis that is similar to what you are describing. The basic idea is that if we live in a simulation, the simulation may have computational limits. Once advanced civilizations use too much computational power or outlive their usefulness, they are deleted from the simulation.
If we choose longtermism, then we are almost certainly in a simulation, because that means others like us would also have chosen longtermism and would then create countless simulations of beings in special situations like ours. This seems far more likely than the alternative that we just happened to find ourselves at the crux of the entire universe by sheer dumb luck.
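As a minimal sketch of this anthropic reasoning (the number of simulations N is a purely hypothetical parameter, not an estimate): if longtermist civilizations each run N simulations of observers who appear to be at a pivotal moment in history, then for every unsimulated observer in that situation there are roughly N simulated ones, so

$$P(\text{simulated} \mid \text{apparently pivotal}) \approx \frac{N}{N+1},$$

which approaches 1 as N grows. The paragraph above is essentially the claim that N is plausibly enormous.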
Andrés Gómez Emilsson discusses this sort of thing in this video. The fact that we may be uniquely positioned in history to influence the far future may be strong evidence that we live in a simulation.
Robin Hanson wrote about the ethical and strategic implications of living in a simulation in his article “How to Live in a Simulation”. According to Hanson, living in a simulation may imply that you should care less about others, live more for today, make your world look more likely to become rich, expect to and try more to participate in pivotal events, be more entertaining and praiseworthy, and keep the famous people around you happier and more interested in you.
If some form of utilitarianism turns out to be the objectively correct system of morality, post-singularity civilizations converge toward utilitarianism, and paradise engineering is tractable, this may be evidence against the simulation hypothesis. Magnus Vinding argues that simulated realities would likely be utopias, and since our reality is not a utopia, the simulation hypothesis is almost certainly false. Thus, if we do live in a simulation, this may imply either that post-singularity civilizations tend not to be utilitarians or that paradise engineering is extremely difficult.
Assuming we do live in a simulation, Alexey Turchin created this map of the different types of simulations we may be living in. Scientific experiments, AI confinement, and education of high-level beings are possible reasons why the simulation may exist in the first place.
Even though there are some EA-aligned organizations that have plenty of funding, not all EA organizations are that well funded. You should consider donating to the causes within EA that are the most neglected, such as cause prioritization research. The Center for Reducing Suffering, for example, had received only £82,864.99 in total funding as of late 2021. The Qualia Research Institute is another EA-aligned organization that is funding-constrained and believes it could put significantly more funding to good use.
This isn’t specifically AI alignment-related, but I found this playlist on defending utilitarian ethics. It discusses things like utility monsters and the torture vs. dust specks thought experiment, and is still somewhat relevant to effective altruism.
My concern for reducing S-risks is based largely on self-interest. There was this LessWrong post on the implications of worse than death scenarios. As long as there is a nonzero chance that eternal oblivion is false and that something resembling an eternal hell could be experienced, it seems rational to try to avert that risk, simply because of its extreme disutility. If Open Individualism turns out to be the correct theory of personal identity, there is a convergence between self-interest and altruism, because I am everyone.
The dilemma is that it does not seem possible to continue living as normal when considering the prevention of worse than death scenarios. If it is agreed that anything should be done to prevent them, then Pascal’s Mugging seems inevitable. Suicide speaks for itself, and even the other two options, if taken seriously, would change your life. What I mean by this is that it would seem rational to completely devote your life to these causes. It would be rational to do anything to obtain money to donate to AI safety, for example, and you would be obliged to sleep for exactly nine hours a day to improve your mental condition, increasing the probability that you will find a way to prevent the scenarios. I would be interested in hearing your thoughts on this dilemma and whether you think there are better ways of reducing the probability.
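As a rough expected-value sketch of why this argument has a Pascalian structure (the probability p and disutility D below are placeholders, not estimates): if a worse than death scenario has probability p > 0 and disutility D, then the expected disutility of ignoring it is

$$\mathbb{E}[\text{disutility}] = p \cdot D,$$

and if D is taken to be astronomically large (something resembling an eternal hell), even a very small p makes this product dominate the calculation, which is exactly what invites the Pascal’s Mugging worry above.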
The Center for Reducing Suffering has this list of open research questions related to how to reduce S-risks.
Targeting Celebrities to Spread Effective Altruism
This partially falls under cognitive enhancement, but what about other forms of consciousness research besides increasing intelligence, such as what QRI is doing? Hedonic set-point enhancement, i.e. making the brain more resistant to suffering, along with research into David Pearce’s idea of “biohappiness”, is arguably just as important as intelligence enhancement. Having a better understanding of valence could also potentially make future AIs safer. Magnus Vinding also wrote this post on personality traits that may be desirable from an effective altruist perspective, so research into cognitive enhancement could also include figuring out how to increase these traits in the population.
Regarding the risk of Effective Evil, I found this article on ways to reduce the threat of malevolent actors creating these sorts of disasters.
There is this post listing EA-related organizations. The org update tag also has a list of EA organizations. Nuño Sempere also wrote this list of evaluations of various longtermist EA organizations. As for specific individuals, Wikipedia has a category for people associated with Effective Altruism.
This leads to the question of how we can get more people to produce promising work in AI safety. There are plenty of highly intelligent people out there who are capable of doing work in AI safety, yet almost none of them do. Maybe trying to popularize AI safety would indirectly contribute to it, by convincing geniuses with the potential to work in AI safety to start working on it. It could also be an incentive problem. Maybe potential AI safety researchers think they can make more money by working in other fields, or maybe there are barriers that make it extremely difficult to become an AI safety researcher.
If you don’t mind me asking, which AI safety researchers do you think are doing the most promising work? Also, are there any AI safety researchers who you think are the least promising, or are doing work that is misguided or harmful?
It depends on what you mean by “neglected”, since neglect is a spectrum. It’s a lot less neglected than it was in the past, but it’s still neglected compared to, say, cancer research or climate change. In terms of public opinion, the average person probably has little understanding of AI safety. I’ve encountered plenty of people saying things like “AI will never be a threat because AI can only do what it’s programmed to do” and variants thereof.
What is neglected within AI safety is suffering-focused AI safety for preventing S-risks. Most AI safety research and existential risk research in general seems to be focused on reducing extinction risks and on colonizing space, rather than on reducing the risk of worse than death scenarios. There is also a risk that some AI alignment research could be actively harmful. One scenario where AI alignment could be actively harmful is the possibility of a “near miss” in AI alignment. In other words, risk from AI alignment roughly follows a Laffer curve, with AI that is slightly misaligned being more risky than both a perfectly aligned AI and a paperclip maximizer. For example, suppose there is an AI aligned to reflect human values. Yet “human values” could include religious hells. There are plenty of religious people who believe that an omnibenevolent God subjects certain people to eternal damnation, which makes one wonder if these sorts of individuals would implement a Hell if they had the power. Thus, an AI designed to reflect human values in this way could potentially involve subjecting certain individuals to something equivalent to a Biblical Hell.
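As a toy illustration of the “near miss” shape (the functional form and numbers below are made up purely for illustration, not drawn from any actual model): suffering-risk is modeled as low for a completely unaligned paperclip maximizer, low for a perfectly aligned AI, and peaked for an AI that is almost, but not quite, aligned.

```python
import math

def toy_s_risk(alignment: float, peak: float = 0.9, width: float = 0.05) -> float:
    """Hypothetical s-risk as a function of alignment in [0, 1].

    A Gaussian bump centered near (but not at) perfect alignment:
    a "near miss" is modeled as riskier than either a paperclip
    maximizer (alignment ~ 0) or a perfectly aligned AI (alignment = 1).
    """
    return math.exp(-((alignment - peak) ** 2) / (2 * width ** 2))

for a in (0.0, 0.5, 0.9, 1.0):
    print(f"alignment={a:.1f}  toy s-risk={toy_s_risk(a):.3f}")
```

The only point of the sketch is the shape of the curve: under this made-up functional form, risk peaks near full alignment rather than at either extreme, mirroring the Laffer-curve analogy.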
Regarding specific AI safety organizations, Brian Tomasik wrote an evaluation of various AI/EA/longtermist organizations, in which he estimated that MIRI has a ~38% chance of being actively harmful. Eliezer Yudkowsky has also harshly criticized OpenAI, arguing that open access to their research poses a significant existential risk. Open access to AI research may increase the risk of malevolent actors creating or influencing the first superintelligence to be created, which poses a potential S-risk.
A major reason why support for eugenically raising IQs through gene editing is low in Western countries could be a backlash against Nazism, since Nazism is associated with eugenics in the mind of the average person. The reason for the low level of support in East Asia is less clear. One possible explanation is that East Asian cultures tend to be more risk-averse.
Interestingly, Hindus and Buddhists also have some of the highest rates of support for evolution among religious groups. A 2009 poll showed that 80% of Hindus and 81% of Buddhists in the United States accept evolution, compared with only 48% of the total US population. Another poll showed that 77% of Indians believe that there is significant evidence to support evolution. The high rate of acceptance of gene editing technology among Hindu Indians could therefore reflect a greater acceptance of science in general.
As a side note, I found this poll of public opinion on gene editing in different countries. India apparently has the highest rate of social acceptance of using gene editing to increase intelligence of any country surveyed. This could have significant geopolitical implications, since the first countries to practice gene editing for higher intelligence could gain an enormous first-mover advantage: they would have far more geniuses per capita, which would greatly increase levels of innovation, soft power, effective governance, and economic efficiency in general. Countries that increase their intelligence through gene editing will likely end up with a massive advantage over countries that don’t.
80,000 Hours has this list of what they consider to be the most pressing world problems, and this list ranking different cause areas by importance, tractability, and uncrowdedness. As for lists of specific organizations, Nuño Sempere created this list of longtermist organizations and evaluations of them, and I also found this AI alignment literature review and charity comparison. Brian Tomasik also wrote this list of charities evaluated from a suffering-reduction perspective.