Suffering should not exist.
Question Mark
These aren’t exactly memes, but here are a few images I generated in Craiyon involving EA-related topics.
Suffering risks have the potential to be far, far worse than the risk of extinction. Negative utilitarians and EFILists may also argue that human extinction and biosphere destruction would be a good thing, or at least morally neutral, since a world with no life would have a complete absence of suffering. Whether to prioritize extinction risk depends on the expected value of the far future. If the expected value of the far future is close to zero, it could be argued that improving the quality of the far future in the event we survive is more important than making sure we survive.
A P-zombie universe could be considered a good thing if one is a negative utilitarian. If a universe lacks any conscious experience, it would not contain any suffering.
Why I’m skeptical of moral circle expansion as a cause area
A lot of people will probably dismiss this due to it being written by a domestic terrorist, but Ted Kaczynski’s book Anti-Tech Revolution: Why and How is worth reading. He goes into detail on why he thinks the technological system will destroy itself, and why he thinks it’s impossible for society to be subject to rational control. He discusses the nature of chaotic systems and self-propagating systems, and he heavily criticizes individuals like Ray Kurzweil. Robin Hanson critiqued Kaczynski’s collapse theory a few years ago on Overcoming Bias. If nothing else, it’s an interesting read with some thought-provoking arguments.
I suspect there’s a good chance that populations in Western nations could be significantly higher than predicted according to your link. The reason for this is that we should expect natural selection to select for whatever traits maximize fertility in the modern environment, such as higher religiosity. This will likely lead to fertility rates rebounding in the next several generations. The sorts of people who aren’t reproducing in the modern environment are being weeded out of the gene pool, and we are likely undergoing selection pressure for “breeders” with a strong instinctive desire to have as many biological children as possible. Certain religious groups, like the Old Order Amish, Hutterites, and Haredim are also growing exponentially, and will likely be demographically dominant in the future.
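As a toy illustration of why a small, high-fertility subgroup can overtake a much larger below-replacement population within a handful of generations, here is a minimal sketch. The starting populations and per-generation growth rates below are invented placeholders for illustration, not census or demographic data:

```python
# Toy projection: small high-fertility subgroup vs. large low-fertility majority.
# All numbers are hypothetical placeholders, not real demographic figures.
subgroup = 350_000        # e.g. an Amish-scale starting population (assumed)
majority = 300_000_000    # assumed majority population

subgroup_growth = 2.0     # doubles each generation (roughly 4+ surviving children per couple)
majority_growth = 0.85    # shrinks 15% per generation (below replacement)

generations = 0
while subgroup < majority:
    subgroup *= subgroup_growth
    majority *= majority_growth
    generations += 1

# Exponential growth against exponential decline closes an ~850x gap quickly.
print(f"Subgroup overtakes majority after {generations} generations")
```

With these (made-up) rates, the subgroup overtakes the majority in about eight generations, which is the basic mechanism behind the prediction above.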
Would you mind posting a link to it?
Do you know of any estimates of the impact of more funding for AI safety? For instance, how much would an additional $1,000 increase the odds of the AI control problem being solved?
Here’s a chart of the amount of suffering caused by different animal foods that Brian Tomasik created. Farmed fish may have even more negative utility than chicken, since they are smaller and therefore more animals are required per unit of meat. The chart is based on suffering per unit of edible food produced rather than suffering throughout the total population, and I’m not sure what the population of farmed fish is relative to the population of chickens. Chicken probably has more negative utility than fish if the chicken population is substantially higher than the farmed fish population. Beef is probably the meat with the least negative utility.
Vegetarians/vegans should consider promoting beef and dairy as the only animal products people consume, as a potential strategy for getting people to cause less suffering to livestock with a high retention rate. I suspect that the average person would be much more willing to give up most animal products while still consuming beef and dairy than to give up meat entirely. Since cows are big, fewer animals are needed to produce a unit of meat compared to meat from smaller animals. Vitalik Buterin has argued that eating big animals as an animal welfare strategy could be 99% as good as veganism. Brian Tomasik also compiled this list of different animal products ranked by the amount of suffering they cause per kilogram, and beef and milk are at the bottom.
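The “fewer, larger animals” arithmetic can be sketched in a few lines. The per-animal figures below are rough placeholders of my own, not Tomasik’s actual estimates; the point is only the direction of the comparison:

```python
# Sketch of the "fewer, larger animals" argument.
# Per-animal numbers are hypothetical placeholders, NOT real welfare estimates.

# animal: (edible kg per animal, days of captive life per animal)
animals = {
    "beef_cow": (220.0, 550),
    "chicken": (1.5, 42),
    "farmed_fish": (0.4, 400),
}

# Crude proxy for suffering: days of captive life endured per kg of food produced.
days_of_life_per_kg = {
    name: days / kg for name, (kg, days) in animals.items()
}

for name, d in sorted(days_of_life_per_kg.items(), key=lambda kv: kv[1]):
    print(f"{name}: {d:.1f} days of life per kg")
```

Even though a cow lives far longer than a chicken or a fish, its large edible yield spreads those days over many more kilograms, which is why beef ends up at the bottom of suffering-per-kg rankings.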
An objection people might make to this is that eating more beef could contribute to climate change, but I’m skeptical that the additional suffering caused by climate change would exceed the suffering prevented by having less factory farming. It could also be argued that habitat loss may reduce wild animal populations, which may reduce wild animal suffering by preventing wild animals from being born.
As a side note, there needs to be some sort of name for the philosophy of eating big animals to reduce livestock suffering described above. Sizeatarianism? Beefatarianism? Big-animal-atarianism? Sufferingatarianism?
… which arguably gives circumcised males the benefit of longer sex ;-)
Not necessarily. Male circumcision may actually cause premature ejaculation in some men.
More seriously: FGM can cause severe bleeding and problems urinating, and later cysts, infections, as well as complications in childbirth and increased risk of newborn deaths (WHO).
Other than complications in childbirth, male circumcision can also cause all of these complications. According to Ayaan Hirsi Ali, who is herself a victim of FGM, boys being circumcised in Africa have a higher risk of complications compared to girls subjected to FGM. Circumcisions/mutilations in Africa are often performed in unsanitary conditions, which is true for both boys and girls subjected to genital mutilation.
In the same vein, comparing female genital mutilation to forced circumcision is… let’s say ignorant of the effects of FGM.
This lecture by Eric Clopper has a decent analysis of the differences between male circumcision and FGM. Male circumcision removes more erogenous tissue and more nerve endings than most forms of FGM.
While it’s true that women are more likely to be victims of sexual violence, men are more likely to be victims of non-sexual violence, such as murder and aggravated assault.
How does this compare to violence against men and boys as a cause area? Worldwide, 78.7% of homicide victims are men. Female genital mutilation is also generally recognized as being a human rights violation, while forced circumcision of boys is still extremely prevalent worldwide. For various social reasons, violence against males seems to be a more neglected cause area compared to violence against females.
How’s this argument different from saying, for example, that we can’t rule out God’s existence so we should take him into consideration? Or that we can’t rule out the possibility of the universe being suddenly magically replaced with a utilitarian-optimal one?
If you want to reduce the risk of going to some form of hell as much as possible, you ought to determine what sorts of “hells” have the highest probability of existing, and to what extent avoiding said hells is tractable. As far as I can tell, the “hells” that seem to be the most realistic are hells resulting from bad AI alignment, and hells resulting from living in a simulation. Hells resulting from bad AI alignment can be plausibly avoided by contributing in some way to solving the AI alignment problem. It’s not clear how hells resulting from living in a simulation could be avoided, but it’s possible that ways to avoid these sorts of hells could be discovered with further analysis of different theoretical types of simulations we may be living in, such as in this map. Robin Hanson explored some of the potential utilitarian implications of the simulation hypothesis in his article How To Live In A Simulation. Furthermore, mind enhancement could potentially reduce S-risks. If you manage to improve your general thinking abilities, you could potentially discover a new way to reduce S-risks.
A Christian or a Muslim could argue that you ought to convert to their religions in order to avoid going to hell. But a problem with Pascal’s Wager-type arguments is the issue of tradeoffs. It’s not clear that practicing a religion is the most optimal way to avoid hell/S-risks. The time spent going to church, praying, and otherwise being dedicated to your religion is time not spent thinking about AI safety and strategizing ways to avoid S-risks. Working on AI safety, strategizing ways to avoid S-risks, and trying to improve your thinking abilities would probably be more effective at reducing your risk of going to some sort of hell than, say, converting to Christianity would.
The linked post is basically a definition of what “survival” means, without any argument on how any of it is at all plausible.
It mentions finding ways to travel to other universes, send information to other universes, creating a superintelligence to figure out ways to avoid heat death, convincing the creators of the simulation to not turn it off, etc. While these hypothetical ways to survive heat death do involve a lot of speculative physics, they are more than just “defining survival”.
I believe neither is plausible by mistake.
Yet we live in a reality where happiness and suffering exist seemingly by mistake. Your nervous system is the result of millions of years of evolution, not the result of an intelligent designer.
the scope is surely not infinite. The heat death of the universe and the finite number of atoms in it pose a limit.
We can’t say for certain that travel to other universes is impossible, so we can’t rule it out as a theoretical possibility. As for the heat death of the universe, Alexey Turchin created this chart of theoretical ways that the heat death of the universe could be survivable by our descendants.
Unless you think unaligned AIs will somehow be inclined to not only ignore what people want, but actually keep them alive and torture them—which sounds implausible to me—how’s this not Pascal’s mugging?
The entities that are being subjected to the torture wouldn’t necessarily be “people” per se; I am talking about conscious entities in general. Solving the alignment problem from the perspective of hedonistic utilitarianism would involve the superintelligence having consciousness-centric values and the ability to create and preserve conscious states with high levels of valence. If a superintelligence with consciousness-centric values that can create large amounts of bliss is realistically possible, the possibility of a consciousness-centric superintelligence that creates large amounts of suffering isn’t necessarily much less realistic. If you believe that a superintelligence causing torture is implausible, you should also accept that a superintelligence creating a utopia is implausible.
Suffering risks. S-risks are arguably a far more serious issue than extinction risk, as the scope of the suffering could be infinite. The fact that there is a risk of a misaligned superintelligence creating a hellish dystopia on a cosmic scale, with more intense suffering than has ever existed in history, means that even if the risk of this happening is small, it is balanced by its extreme disutility. S-risks are also highly neglected relative to their potential extreme disutility. It could even be argued that it would be rational to completely dedicate your life to reducing S-risks because of this. The only organizations I’m aware of that are directly working on reducing S-risks are the Center on Long-Term Risk and the Center for Reducing Suffering. One possible way AI could lead to astronomical suffering is if there is a “near miss” in AI alignment, where the AI alignment problem is partially solved, but not entirely. Other potential sources of S-risks may include malevolent actors, or an AI that includes religious hells when aligned to reflect the values of humanity.
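The small-probability-times-extreme-disutility reasoning above can be made concrete with a toy expected-value calculation. Every number here is invented purely for illustration; the point is only that a sufficiently bad outcome can dominate in expectation even at a much lower probability:

```python
# Toy expected-disutility comparison. All probabilities and magnitudes
# are invented for illustration, not actual risk estimates.

p_extinction = 0.05            # assumed chance of an extinction catastrophe
extinction_disutility = 1e10   # arbitrary units of lost value

p_s_risk = 0.001               # 50x smaller assumed probability
s_risk_disutility = 1e15       # vastly worse outcome in the same units

ev_extinction = p_extinction * extinction_disutility
ev_s_risk = p_s_risk * s_risk_disutility

# Despite the much smaller probability, the S-risk dominates in expectation.
print(f"extinction: {ev_extinction:.2e}, s-risk: {ev_s_risk:.2e}")
```

Under these made-up numbers the expected disutility of the S-risk is about 2,000 times larger, which is the shape of the argument for prioritizing S-risk reduction despite its low probability.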
Alignment being solved at all would require alignment being solvable with human-level intelligence. Even though IQ-augmented humans wouldn’t be “superintelligent”, they would have additional intelligence that they could use to solve alignment. Additionally, it probably takes more intelligence to build an aligned superintelligence than it does to create a random superintelligence. Without alignment, chances are that the first superintelligence to exist will be whatever superintelligence is the easiest to build.